Jan 20 01:05:38.827110 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026 Jan 20 01:05:38.827180 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea Jan 20 01:05:38.827203 kernel: BIOS-provided physical RAM map: Jan 20 01:05:38.827214 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 20 01:05:38.827225 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 20 01:05:38.827234 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 20 01:05:38.827247 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 20 01:05:38.827259 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 20 01:05:38.827400 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 01:05:38.827413 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 20 01:05:38.827423 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 20 01:05:38.827437 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 20 01:05:38.827446 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 20 01:05:38.827455 kernel: NX (Execute Disable) protection: active Jan 20 01:05:38.827466 kernel: APIC: Static calls initialized Jan 20 01:05:38.827475 kernel: SMBIOS 2.8 present. 
Jan 20 01:05:38.827603 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 20 01:05:38.832142 kernel: DMI: Memory slots populated: 1/1 Jan 20 01:05:38.832155 kernel: Hypervisor detected: KVM Jan 20 01:05:38.832166 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 20 01:05:38.832177 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 01:05:38.832188 kernel: kvm-clock: using sched offset of 48835904058 cycles Jan 20 01:05:38.832200 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 01:05:38.832212 kernel: tsc: Detected 2445.426 MHz processor Jan 20 01:05:38.832223 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 01:05:38.832234 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 01:05:38.832254 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 20 01:05:38.832266 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 20 01:05:38.832277 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 01:05:38.832288 kernel: Using GB pages for direct mapping Jan 20 01:05:38.832300 kernel: ACPI: Early table checksum verification disabled Jan 20 01:05:38.832311 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 20 01:05:38.832322 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:05:38.832333 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:05:38.832344 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:05:38.832359 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 20 01:05:38.832369 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:05:38.832380 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:05:38.832391 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:05:38.832403 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 01:05:38.832549 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 20 01:05:38.832563 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 20 01:05:38.832572 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 20 01:05:38.832582 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 20 01:05:38.832591 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 20 01:05:38.832601 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 20 01:05:38.832972 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 20 01:05:38.832983 kernel: No NUMA configuration found Jan 20 01:05:38.832993 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 20 01:05:38.833007 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 20 01:05:38.833017 kernel: Zone ranges: Jan 20 01:05:38.833027 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 01:05:38.833037 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 20 01:05:38.833046 kernel: Normal empty Jan 20 01:05:38.833055 kernel: Device empty Jan 20 01:05:38.833065 kernel: Movable zone start for each node Jan 20 01:05:38.833074 kernel: Early memory node ranges Jan 20 01:05:38.833084 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 20 01:05:38.833097 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 20 01:05:38.833106 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 20 01:05:38.833116 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 01:05:38.833125 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 20 01:05:38.833264 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 20 01:05:38.833280 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 01:05:38.833291 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 01:05:38.833301 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 01:05:38.833311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 01:05:38.833445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 01:05:38.833457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 01:05:38.833467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 01:05:38.833478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 01:05:38.833488 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 01:05:38.833498 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 01:05:38.833509 kernel: TSC deadline timer available Jan 20 01:05:38.833519 kernel: CPU topo: Max. logical packages: 1 Jan 20 01:05:38.833530 kernel: CPU topo: Max. logical dies: 1 Jan 20 01:05:38.833548 kernel: CPU topo: Max. dies per package: 1 Jan 20 01:05:38.833559 kernel: CPU topo: Max. threads per core: 1 Jan 20 01:05:38.833571 kernel: CPU topo: Num. cores per package: 4 Jan 20 01:05:38.833583 kernel: CPU topo: Num. threads per package: 4 Jan 20 01:05:38.833595 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 20 01:05:38.839306 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 01:05:38.839335 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 01:05:38.839348 kernel: kvm-guest: setup PV sched yield Jan 20 01:05:38.839362 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 20 01:05:38.839381 kernel: Booting paravirtualized kernel on KVM Jan 20 01:05:38.839393 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 01:05:38.839405 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 01:05:38.839416 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 20 01:05:38.839428 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 20 01:05:38.839441 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 01:05:38.839452 kernel: kvm-guest: PV spinlocks enabled Jan 20 01:05:38.839466 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 01:05:38.839480 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea Jan 20 01:05:38.839500 kernel: random: crng init done Jan 20 01:05:38.839513 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 01:05:38.839525 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 
01:05:38.839536 kernel: Fallback order for Node 0: 0 Jan 20 01:05:38.839547 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jan 20 01:05:38.839558 kernel: Policy zone: DMA32 Jan 20 01:05:38.839571 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 01:05:38.839583 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 01:05:38.839596 kernel: ftrace: allocating 40097 entries in 157 pages Jan 20 01:05:38.839746 kernel: ftrace: allocated 157 pages with 5 groups Jan 20 01:05:38.840008 kernel: Dynamic Preempt: voluntary Jan 20 01:05:38.840022 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 01:05:38.840044 kernel: rcu: RCU event tracing is enabled. Jan 20 01:05:38.840057 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 01:05:38.840070 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 01:05:38.840205 kernel: Rude variant of Tasks RCU enabled. Jan 20 01:05:38.840220 kernel: Tracing variant of Tasks RCU enabled. Jan 20 01:05:38.840233 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 01:05:38.840251 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 01:05:38.840264 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 01:05:38.840277 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 01:05:38.840289 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 01:05:38.840302 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 01:05:38.840314 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 01:05:38.840343 kernel: Console: colour VGA+ 80x25 Jan 20 01:05:38.840357 kernel: printk: legacy console [ttyS0] enabled Jan 20 01:05:38.840370 kernel: ACPI: Core revision 20240827 Jan 20 01:05:38.840383 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 01:05:38.840396 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 01:05:38.840414 kernel: x2apic enabled Jan 20 01:05:38.840432 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 01:05:38.840578 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 01:05:38.840595 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 01:05:38.842154 kernel: kvm-guest: setup PV IPIs Jan 20 01:05:38.842179 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 01:05:38.842192 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 20 01:05:38.842203 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 20 01:05:38.842215 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 01:05:38.842226 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 01:05:38.842238 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 01:05:38.842249 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 01:05:38.842261 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 01:05:38.842273 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 01:05:38.842288 kernel: Speculative Store Bypass: Vulnerable Jan 20 01:05:38.842299 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 01:05:38.842312 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 20 01:05:38.842324 kernel: active return thunk: srso_alias_return_thunk Jan 20 01:05:38.842335 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 01:05:38.842346 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 01:05:38.842358 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 01:05:38.842369 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 01:05:38.842384 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 01:05:38.842395 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 01:05:38.842520 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 01:05:38.842534 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 20 01:05:38.842546 kernel: Freeing SMP alternatives memory: 32K Jan 20 01:05:38.842557 kernel: pid_max: default: 32768 minimum: 301 Jan 20 01:05:38.842569 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 01:05:38.842580 kernel: landlock: Up and running. Jan 20 01:05:38.842591 kernel: SELinux: Initializing. Jan 20 01:05:38.842737 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 01:05:38.854210 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 01:05:38.854373 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 01:05:38.854388 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 01:05:38.854400 kernel: signal: max sigframe size: 1776 Jan 20 01:05:38.854413 kernel: rcu: Hierarchical SRCU implementation. Jan 20 01:05:38.854428 kernel: rcu: Max phase no-delay instances is 400. Jan 20 01:05:38.854440 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 01:05:38.854450 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 01:05:38.854468 kernel: smp: Bringing up secondary CPUs ... Jan 20 01:05:38.854478 kernel: smpboot: x86: Booting SMP configuration: Jan 20 01:05:38.854488 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 20 01:05:38.854498 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 01:05:38.854508 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 20 01:05:38.854520 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145096K reserved, 0K cma-reserved) Jan 20 01:05:38.854534 kernel: devtmpfs: initialized Jan 20 01:05:38.854545 kernel: x86/mm: Memory block size: 128MB Jan 20 01:05:38.854555 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 01:05:38.854569 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 01:05:38.854579 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 01:05:38.854589 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 01:05:38.854599 kernel: audit: initializing netlink subsys (disabled) Jan 20 01:05:38.857133 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 01:05:38.857148 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 01:05:38.857160 kernel: audit: type=2000 audit(1768871086.911:1): state=initialized audit_enabled=0 res=1 Jan 20 01:05:38.857171 kernel: cpuidle: using governor menu Jan 20 01:05:38.857181 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 01:05:38.857198 kernel: dca service started, version 1.12.1 Jan 20 01:05:38.857208 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 20 01:05:38.857219 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 01:05:38.857230 kernel: PCI: Using configuration type 1 for base access Jan 20 01:05:38.857242 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 20 01:05:38.860403 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 01:05:38.860421 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 01:05:38.860437 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 01:05:38.860448 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 01:05:38.860466 kernel: ACPI: Added _OSI(Module Device) Jan 20 01:05:38.860476 kernel: ACPI: Added _OSI(Processor Device) Jan 20 01:05:38.860489 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 01:05:38.860501 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 01:05:38.860511 kernel: ACPI: Interpreter enabled Jan 20 01:05:38.860522 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 01:05:38.860532 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 01:05:38.860542 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 01:05:38.860552 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 01:05:38.860566 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 01:05:38.860576 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 01:05:38.898299 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 01:05:38.898538 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 01:05:38.899130 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 01:05:38.899151 kernel: PCI host bridge to bus 0000:00 Jan 20 01:05:38.907204 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 01:05:38.907452 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 01:05:38.917996 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 01:05:38.918231 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 20 01:05:38.918408 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 01:05:38.918587 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 20 01:05:38.919182 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 01:05:38.926116 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 20 01:05:38.931006 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 20 01:05:38.931218 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 20 01:05:38.931403 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 20 01:05:39.070233 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 20 01:05:39.070492 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 01:05:39.071096 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 21484 usecs Jan 20 01:05:39.088213 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 20 01:05:39.088494 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 20 01:05:39.093427 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 20 01:05:39.094041 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 20 01:05:39.101966 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 20 01:05:39.102263 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 20 01:05:39.102468 kernel: pci 0000:00:03.0: BAR 1 [mem 
0xfebd2000-0xfebd2fff] Jan 20 01:05:39.108034 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Jan 20 01:05:39.117029 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 20 01:05:39.117342 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 20 01:05:39.117538 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 20 01:05:39.118084 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 20 01:05:39.118275 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 20 01:05:39.125600 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 20 01:05:39.132431 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 01:05:39.133906 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 17578 usecs Jan 20 01:05:39.134384 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 20 01:05:39.134579 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 20 01:05:39.138337 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 20 01:05:39.146409 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 20 01:05:39.147152 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 20 01:05:39.147172 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 01:05:39.147184 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 01:05:39.147194 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 01:05:39.147204 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 01:05:39.147214 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 01:05:39.147224 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 01:05:39.147236 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 01:05:39.147248 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 01:05:39.147264 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 20 01:05:39.147275 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 01:05:39.147285 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 01:05:39.147294 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 01:05:39.147304 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 01:05:39.147314 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 01:05:39.147324 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 01:05:39.147334 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 01:05:39.147344 kernel: iommu: Default domain type: Translated Jan 20 01:05:39.147361 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 01:05:39.147374 kernel: PCI: Using ACPI for IRQ routing Jan 20 01:05:39.147384 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 01:05:39.147394 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 20 01:05:39.147404 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 20 01:05:39.147595 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 01:05:39.157228 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 01:05:39.157429 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 01:05:39.157459 kernel: vgaarb: loaded Jan 20 01:05:39.157471 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 20 01:05:39.157481 kernel: hpet0: 
3 comparators, 64-bit 100.000000 MHz counter Jan 20 01:05:39.157491 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 01:05:39.157501 kernel: hrtimer: interrupt took 13445787 ns Jan 20 01:05:39.157512 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 01:05:39.157522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 01:05:39.157532 kernel: pnp: PnP ACPI init Jan 20 01:05:39.179329 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 01:05:39.179401 kernel: pnp: PnP ACPI: found 6 devices Jan 20 01:05:39.179417 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 01:05:39.179428 kernel: NET: Registered PF_INET protocol family Jan 20 01:05:39.179438 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 01:05:39.179448 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 01:05:39.179458 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 01:05:39.179469 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 01:05:39.179479 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 01:05:39.179494 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 01:05:39.179503 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 01:05:39.179514 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 01:05:39.179525 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 01:05:39.179538 kernel: NET: Registered PF_XDP protocol family Jan 20 01:05:39.188062 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 01:05:39.188298 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 01:05:39.188481 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 01:05:39.189056 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 20 01:05:39.189256 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 20 01:05:39.189431 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 20 01:05:39.189448 kernel: PCI: CLS 0 bytes, default 64 Jan 20 01:05:39.189461 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 20 01:05:39.189471 kernel: Initialise system trusted keyrings Jan 20 01:05:39.189483 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 01:05:39.189493 kernel: Key type asymmetric registered Jan 20 01:05:39.189504 kernel: Asymmetric key parser 'x509' registered Jan 20 01:05:39.189516 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 01:05:39.189533 kernel: io scheduler mq-deadline registered Jan 20 01:05:39.189544 kernel: io scheduler kyber registered Jan 20 01:05:39.189554 kernel: io scheduler bfq registered Jan 20 01:05:39.189567 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 01:05:39.189580 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 01:05:39.189590 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 01:05:39.189600 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 01:05:39.198922 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 01:05:39.198980 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 01:05:39.199003 kernel: i8042: PNP: PS/2 Controller 
[PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 01:05:39.199018 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 01:05:39.199031 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 01:05:39.204295 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 01:05:39.204332 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 01:05:39.204536 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 01:05:39.212587 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T01:05:32 UTC (1768871132) Jan 20 01:05:39.213214 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 01:05:39.213246 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 01:05:39.213258 kernel: NET: Registered PF_INET6 protocol family Jan 20 01:05:39.213268 kernel: Segment Routing with IPv6 Jan 20 01:05:39.213278 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 01:05:39.213288 kernel: NET: Registered PF_PACKET protocol family Jan 20 01:05:39.213298 kernel: Key type dns_resolver registered Jan 20 01:05:39.213308 kernel: IPI shorthand broadcast: enabled Jan 20 01:05:39.213321 kernel: sched_clock: Marking stable (35692151513, 5061318206)->(46747983001, -5994513282) Jan 20 01:05:39.213335 kernel: registered taskstats version 1 Jan 20 01:05:39.213350 kernel: Loading compiled-in X.509 certificates Jan 20 01:05:39.213360 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9' Jan 20 01:05:39.213370 kernel: Demotion targets for Node 0: null Jan 20 01:05:39.213380 kernel: Key type .fscrypt registered Jan 20 01:05:39.213389 kernel: Key type fscrypt-provisioning registered Jan 20 01:05:39.213399 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 01:05:39.213410 kernel: ima: Allocated hash algorithm: sha1 Jan 20 01:05:39.213420 kernel: ima: No architecture policies found Jan 20 01:05:39.213435 kernel: clk: Disabling unused clocks Jan 20 01:05:39.213446 kernel: Warning: unable to open an initial console. Jan 20 01:05:39.218160 kernel: Freeing unused kernel image (initmem) memory: 46204K Jan 20 01:05:39.218186 kernel: Write protecting the kernel read-only data: 40960k Jan 20 01:05:39.218197 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 20 01:05:39.218207 kernel: Run /init as init process Jan 20 01:05:39.218217 kernel: with arguments: Jan 20 01:05:39.218228 kernel: /init Jan 20 01:05:39.218241 kernel: with environment: Jan 20 01:05:39.218253 kernel: HOME=/ Jan 20 01:05:39.218275 kernel: TERM=linux Jan 20 01:05:39.218290 systemd[1]: Successfully made /usr/ read-only. Jan 20 01:05:39.218308 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 01:05:39.218323 systemd[1]: Detected virtualization kvm. Jan 20 01:05:39.218336 systemd[1]: Detected architecture x86-64. Jan 20 01:05:39.218349 systemd[1]: Running in initrd. Jan 20 01:05:39.218360 systemd[1]: No hostname configured, using default hostname. Jan 20 01:05:39.218379 systemd[1]: Hostname set to . Jan 20 01:05:39.218407 systemd[1]: Initializing machine ID from VM UUID. Jan 20 01:05:39.218422 systemd[1]: Queued start job for default target initrd.target. 
Jan 20 01:05:39.218434 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:05:39.218445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:05:39.218459 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 01:05:39.218477 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:05:39.218494 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 01:05:39.218506 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 01:05:39.218519 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 01:05:39.218530 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 01:05:39.218541 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:05:39.218556 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:05:39.218567 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:05:39.218577 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:05:39.218589 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:05:39.218603 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:05:39.224558 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:05:39.224582 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:05:39.224595 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 01:05:39.224606 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 01:05:39.231193 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:05:39.231212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:05:39.231224 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:05:39.231235 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:05:39.231253 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 01:05:39.231264 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:05:39.231275 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 01:05:39.231286 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 01:05:39.231302 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 01:05:39.231313 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:05:39.231324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:05:39.231335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:05:39.231407 systemd-journald[202]: Collecting audit messages is disabled. Jan 20 01:05:39.231439 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 01:05:39.231453 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:05:39.231468 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 20 01:05:39.231480 systemd-journald[202]: Journal started Jan 20 01:05:39.231502 systemd-journald[202]: Runtime Journal (/run/log/journal/ef4234d4dba84815aec80d78501cc320) is 6M, max 48.3M, 42.2M free. Jan 20 01:05:39.412144 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:05:39.440530 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 01:05:39.523115 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:05:39.527205 systemd-modules-load[204]: Inserted module 'overlay' Jan 20 01:05:39.876462 systemd-tmpfiles[214]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 01:05:46.770436 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 01:05:46.770643 kernel: Bridge firewalling registered Jan 20 01:05:39.902207 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:05:40.713076 systemd-modules-load[204]: Inserted module 'br_netfilter' Jan 20 01:05:46.015382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:05:46.033302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:05:46.033651 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:05:46.120299 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:05:46.159471 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:05:46.180465 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:05:47.374405 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:05:47.428512 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:05:47.685718 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:05:47.778469 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 01:05:47.912589 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:05:48.093346 dracut-cmdline[241]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea Jan 20 01:05:48.732701 systemd-resolved[230]: Positive Trust Anchors: Jan 20 01:05:48.741258 systemd-resolved[230]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:05:48.741310 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:05:48.767692 systemd-resolved[230]: Defaulting to hostname 'linux'. Jan 20 01:05:48.815324 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:05:49.164105 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:05:49.975168 kernel: SCSI subsystem initialized Jan 20 01:05:50.233254 kernel: Loading iSCSI transport class v2.0-870. Jan 20 01:05:50.642495 kernel: iscsi: registered transport (tcp) Jan 20 01:05:50.835928 kernel: iscsi: registered transport (qla4xxx) Jan 20 01:05:50.836024 kernel: QLogic iSCSI HBA Driver Jan 20 01:05:51.343451 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:05:51.687447 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:05:51.747475 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:05:52.814373 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 01:05:52.895255 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 01:05:54.087648 kernel: raid6: avx2x4 gen() 10669 MB/s Jan 20 01:05:54.117457 kernel: raid6: avx2x2 gen() 7686 MB/s Jan 20 01:05:54.157623 kernel: raid6: avx2x1 gen() 6575 MB/s Jan 20 01:05:54.157736 kernel: raid6: using algorithm avx2x4 gen() 10669 MB/s Jan 20 01:05:54.220520 kernel: raid6: .... xor() 1969 MB/s, rmw enabled Jan 20 01:05:54.220618 kernel: raid6: using avx2x2 recovery algorithm Jan 20 01:05:54.603399 kernel: xor: automatically using best checksumming function avx Jan 20 01:05:58.269151 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 01:05:58.599556 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:05:58.845308 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:05:59.805698 systemd-udevd[451]: Using default interface naming scheme 'v255'. Jan 20 01:05:59.927413 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:06:00.033565 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 01:06:00.440185 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Jan 20 01:06:01.315453 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:06:01.372373 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:06:02.824679 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:06:03.207727 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 01:06:03.953744 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:06:03.955528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 01:06:04.042492 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:06:04.236471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:06:04.266702 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 01:06:04.371504 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 01:06:04.442589 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 20 01:06:04.508553 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 01:06:04.508649 kernel: GPT:9289727 != 19775487 Jan 20 01:06:04.508666 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 01:06:04.508680 kernel: GPT:9289727 != 19775487 Jan 20 01:06:04.508693 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 01:06:04.525345 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:06:04.709706 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 01:06:05.527457 kernel: libata version 3.00 loaded. Jan 20 01:06:05.795501 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 01:06:09.077427 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 01:06:09.085375 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 01:06:09.085408 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 01:06:09.085428 kernel: AES CTR mode by8 optimization enabled Jan 20 01:06:09.085447 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 01:06:09.085919 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 01:06:09.088693 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 01:06:09.089116 kernel: scsi host0: ahci Jan 20 01:06:09.089411 kernel: scsi host1: ahci Jan 20 01:06:09.089661 kernel: scsi host2: ahci Jan 20 01:06:09.091487 kernel: scsi host3: ahci Jan 20 01:06:09.092246 kernel: scsi host4: ahci Jan 20 01:06:09.092522 kernel: scsi host5: ahci Jan 20 01:06:09.099152 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Jan 20 01:06:09.099180 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Jan 20 01:06:09.099195 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Jan 20 01:06:09.099209 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Jan 20 01:06:09.099223 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Jan 20 01:06:09.099242 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Jan 20 01:06:09.099258 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 01:06:09.099284 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 01:06:09.099298 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 01:06:09.099312 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 01:06:09.099330 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 01:06:09.099344 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 01:06:09.099358 kernel: ata3.00: applying bridge limits Jan 20 01:06:09.099372 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 01:06:09.099385 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 01:06:09.099399 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 01:06:09.099415 kernel: ata3.00: configured 
for UDMA/100 Jan 20 01:06:09.099428 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 01:06:09.102322 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 01:06:09.102550 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 01:06:09.102567 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 01:06:09.173267 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 01:06:09.195307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:06:09.391727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:06:09.478392 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 01:06:09.491629 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 01:06:09.580230 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 01:06:09.657295 disk-uuid[627]: Primary Header is updated. Jan 20 01:06:09.657295 disk-uuid[627]: Secondary Entries is updated. Jan 20 01:06:09.657295 disk-uuid[627]: Secondary Header is updated. Jan 20 01:06:09.725836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:06:10.668455 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 01:06:10.715166 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:06:10.818512 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:06:10.818548 disk-uuid[628]: The operation has completed successfully. Jan 20 01:06:10.729686 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:06:10.746703 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:06:10.809998 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 01:06:11.236473 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:06:11.341264 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 01:06:11.353928 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 01:06:11.633250 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 01:06:11.739276 sh[652]: Success Jan 20 01:06:11.879655 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 01:06:11.902482 kernel: device-mapper: uevent: version 1.0.3 Jan 20 01:06:11.902547 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 01:06:12.151344 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 01:06:12.575706 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:06:12.646520 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 01:06:12.812026 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 20 01:06:12.968232 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (664) Jan 20 01:06:13.018907 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340 Jan 20 01:06:13.018996 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:06:13.294897 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 01:06:13.296520 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 01:06:13.334902 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 01:06:13.430354 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 01:06:13.550307 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 01:06:13.627521 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 01:06:13.680508 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 01:06:14.384900 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (697) Jan 20 01:06:14.477285 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:06:14.477366 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:06:14.711462 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:06:14.711744 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:06:14.951906 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:06:15.098702 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 01:06:15.218192 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 01:06:19.830399 ignition[753]: Ignition 2.22.0 Jan 20 01:06:19.830692 ignition[753]: Stage: fetch-offline Jan 20 01:06:19.830973 ignition[753]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:06:19.830992 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:06:19.831437 ignition[753]: parsed url from cmdline: "" Jan 20 01:06:19.831443 ignition[753]: no config URL provided Jan 20 01:06:19.831451 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:06:19.831465 ignition[753]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:06:19.831739 ignition[753]: op(1): [started] loading QEMU firmware config module Jan 20 01:06:19.831747 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 01:06:20.253303 ignition[753]: op(1): [finished] loading QEMU firmware config module Jan 20 01:06:22.815277 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:06:23.280715 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:06:23.928587 ignition[753]: parsing config with SHA512: 83f3d020689b6dc4d7a883d0fe53e1adc7eaf9c39ca553a3ef2a024303e99fb8f6d8d8c68fad0448e5111f4b23eed03c50c2a3549d2faaf6df983b4b867068bc Jan 20 01:06:24.648039 unknown[753]: fetched base config from "system" Jan 20 01:06:24.648058 unknown[753]: fetched user config from "qemu" Jan 20 01:06:24.650683 ignition[753]: fetch-offline: fetch-offline passed Jan 20 01:06:24.723974 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 20 01:06:24.651165 ignition[753]: Ignition finished successfully Jan 20 01:06:25.377135 systemd-networkd[841]: lo: Link UP Jan 20 01:06:25.377698 systemd-networkd[841]: lo: Gained carrier Jan 20 01:06:25.556388 systemd-networkd[841]: Enumeration completed Jan 20 01:06:25.571147 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:06:25.695976 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:06:25.695993 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:06:25.801732 systemd-networkd[841]: eth0: Link UP Jan 20 01:06:25.818667 systemd-networkd[841]: eth0: Gained carrier Jan 20 01:06:25.818962 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:06:25.853021 systemd[1]: Reached target network.target - Network. Jan 20 01:06:26.098112 systemd-networkd[841]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 01:06:26.103596 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 01:06:26.113094 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 01:06:27.174742 ignition[846]: Ignition 2.22.0 Jan 20 01:06:27.174970 ignition[846]: Stage: kargs Jan 20 01:06:27.175356 ignition[846]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:06:27.242526 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 01:06:27.175380 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:06:27.318364 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 01:06:27.177010 ignition[846]: kargs: kargs passed Jan 20 01:06:27.177092 ignition[846]: Ignition finished successfully Jan 20 01:06:27.690182 systemd-networkd[841]: eth0: Gained IPv6LL Jan 20 01:06:27.746707 ignition[854]: Ignition 2.22.0 Jan 20 01:06:27.748559 ignition[854]: Stage: disks Jan 20 01:06:27.759632 ignition[854]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:06:27.759649 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:06:27.795118 ignition[854]: disks: disks passed Jan 20 01:06:27.839091 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 01:06:27.795208 ignition[854]: Ignition finished successfully Jan 20 01:06:27.907462 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 01:06:28.072192 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:06:28.137600 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:06:28.233983 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:06:28.274554 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:06:28.377656 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 01:06:29.018471 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 20 01:06:29.083616 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:06:29.229745 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:06:32.139666 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none. 
Jan 20 01:06:32.174011 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:06:32.262111 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:06:32.411601 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:06:32.468259 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:06:32.518977 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 01:06:32.519075 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:06:32.519124 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:06:33.154679 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:06:33.223012 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 01:06:33.403204 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (873) Jan 20 01:06:33.549158 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:06:33.585394 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:06:33.787592 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:06:33.787691 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:06:33.824131 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:06:34.418741 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 01:06:34.548635 initrd-setup-root[904]: cut: /sysroot/etc/group: No such file or directory Jan 20 01:06:34.741152 initrd-setup-root[911]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 01:06:34.830083 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 01:06:37.089662 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 01:06:37.195153 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 01:06:37.264596 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:06:37.482111 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:06:37.543489 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:06:38.088742 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 01:06:39.056228 ignition[988]: INFO : Ignition 2.22.0 Jan 20 01:06:39.056228 ignition[988]: INFO : Stage: mount Jan 20 01:06:39.270060 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:06:39.270060 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:06:39.270060 ignition[988]: INFO : mount: mount passed Jan 20 01:06:39.270060 ignition[988]: INFO : Ignition finished successfully Jan 20 01:06:39.118134 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:06:39.173710 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:06:39.557316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 20 01:06:39.941716 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1000) Jan 20 01:06:40.050473 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 01:06:40.050565 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:06:40.295685 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:06:40.295972 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:06:40.327145 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:06:40.772110 ignition[1017]: INFO : Ignition 2.22.0 Jan 20 01:06:40.772110 ignition[1017]: INFO : Stage: files Jan 20 01:06:40.842270 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:06:40.842270 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:06:40.842270 ignition[1017]: DEBUG : files: compiled without relabeling support, skipping Jan 20 01:06:40.842270 ignition[1017]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 01:06:40.842270 ignition[1017]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 01:06:41.316615 ignition[1017]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 01:06:41.316615 ignition[1017]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 01:06:41.316615 ignition[1017]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 01:06:41.316615 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 01:06:41.316615 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 01:06:40.910183 unknown[1017]: wrote ssh authorized keys file for user: core Jan 20 01:06:42.402217 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 01:06:47.514000 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1005542207 wd_nsec: 1005541739 Jan 20 01:06:48.519213 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 01:06:48.702733 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 01:06:48.702733 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 01:06:48.702733 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:06:48.702733 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:06:48.702733 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:06:48.702733 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:06:48.702733 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:06:48.702733 ignition[1017]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:06:48.702733 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:06:48.702733 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:06:48.702733 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:06:49.250686 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:06:49.250686 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:06:49.250686 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 01:06:49.618247 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 01:07:07.432468 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:07:07.432468 ignition[1017]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 01:07:07.772313 ignition[1017]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:07:07.772313 ignition[1017]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:07:07.772313 ignition[1017]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 01:07:07.772313 ignition[1017]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 01:07:07.772313 ignition[1017]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 01:07:07.772313 ignition[1017]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 01:07:07.772313 ignition[1017]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 01:07:07.772313 ignition[1017]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 01:07:09.120051 ignition[1017]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 01:07:09.308230 ignition[1017]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 01:07:09.308230 ignition[1017]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 01:07:09.308230 ignition[1017]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 01:07:09.308230 ignition[1017]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 01:07:09.308230 ignition[1017]: INFO : files: createResultFile: createFiles: op(12): [started] writing 
file "/sysroot/etc/.ignition-result.json" Jan 20 01:07:09.308230 ignition[1017]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:07:09.308230 ignition[1017]: INFO : files: files passed Jan 20 01:07:09.308230 ignition[1017]: INFO : Ignition finished successfully Jan 20 01:07:09.502032 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 01:07:09.782989 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 01:07:10.392669 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 01:07:10.589149 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 01:07:10.671629 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 01:07:11.093547 initrd-setup-root-after-ignition[1048]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 01:07:11.364167 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:07:11.487051 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:07:11.487051 initrd-setup-root-after-ignition[1050]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:07:11.407568 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:07:11.499185 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 01:07:11.785234 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 01:07:13.234526 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 01:07:13.320518 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 01:07:13.640534 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 01:07:13.779712 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 01:07:13.888617 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 01:07:14.015548 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 01:07:15.222650 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:07:15.450685 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 01:07:16.248073 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:07:16.835536 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:07:16.957459 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 01:07:17.120586 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 01:07:17.122585 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:07:17.613310 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 01:07:17.710389 systemd[1]: Stopped target basic.target - Basic System. Jan 20 01:07:17.750591 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 01:07:17.751217 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:07:17.751368 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jan 20 01:07:17.751497 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 01:07:17.751619 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 01:07:17.751747 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:07:17.827657 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 01:07:17.828434 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 01:07:17.917549 systemd[1]: Stopped target swap.target - Swaps. Jan 20 01:07:17.917693 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 01:07:17.925370 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:07:18.566168 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:07:18.852380 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:07:18.953453 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 01:07:18.986434 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:07:19.333474 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 01:07:19.599540 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 01:07:20.215574 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:07:20.338485 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:07:20.452609 systemd[1]: Stopped target paths.target - Path Units. Jan 20 01:07:20.559252 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:07:20.638459 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:07:20.777727 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:07:20.830743 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:07:21.104540 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:07:21.110485 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:07:21.294379 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:07:21.302287 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:07:21.517605 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 01:07:21.521718 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:07:21.642589 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 01:07:21.648498 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 01:07:21.852242 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 01:07:21.894301 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 01:07:21.895251 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:07:21.921542 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 01:07:22.128396 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 01:07:22.129605 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:07:22.296745 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:07:22.297579 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:07:22.587527 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 20 01:07:22.593485 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:07:22.817556 ignition[1074]: INFO : Ignition 2.22.0 Jan 20 01:07:22.897220 ignition[1074]: INFO : Stage: umount Jan 20 01:07:22.897220 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:07:22.897220 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:07:22.897220 ignition[1074]: INFO : umount: umount passed Jan 20 01:07:22.897220 ignition[1074]: INFO : Ignition finished successfully Jan 20 01:07:22.912705 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:07:22.913371 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:07:23.087387 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 01:07:23.110350 systemd[1]: Stopped target network.target - Network. Jan 20 01:07:23.188363 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:07:23.188638 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 01:07:23.253537 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:07:23.253713 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:07:23.306324 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 01:07:23.306448 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 01:07:23.626464 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 01:07:23.626577 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 01:07:23.629746 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:07:24.026711 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:07:24.125453 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 01:07:24.125639 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 01:07:24.324343 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 01:07:24.330328 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:07:24.532500 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 20 01:07:24.533470 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:07:24.539655 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:07:24.870284 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 20 01:07:24.912719 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 01:07:25.555531 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:07:25.555645 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:07:25.585544 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:07:25.585677 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:07:25.616663 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 01:07:25.621399 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:07:25.621501 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:07:25.621632 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:07:25.621702 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 20 01:07:25.832337 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:07:25.832466 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:07:25.950269 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:07:25.950385 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:07:26.101333 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:07:26.287697 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 20 01:07:26.296359 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 20 01:07:26.550136 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:07:26.552418 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:07:26.629616 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:07:26.629730 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:07:26.719451 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:07:26.722332 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:07:27.614638 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:07:27.615077 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:07:27.909671 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 01:07:27.910131 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:07:28.021360 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:07:28.021492 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:07:28.042603 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:07:28.047304 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 01:07:28.047409 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:07:28.240590 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 01:07:28.240702 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:07:28.391431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:07:28.391545 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:07:28.977747 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 20 01:07:28.980498 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 20 01:07:28.989675 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 01:07:28.991488 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 01:07:28.994319 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:07:29.009233 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 01:07:29.009403 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:07:29.040334 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 01:07:29.323678 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jan 20 01:07:29.881689 systemd[1]: Switching root. Jan 20 01:07:30.436486 systemd-journald[202]: Journal stopped Jan 20 01:08:02.321645 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Jan 20 01:08:02.322535 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 01:08:02.322572 kernel: SELinux: policy capability open_perms=1 Jan 20 01:08:02.322599 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 01:08:02.322618 kernel: SELinux: policy capability always_check_network=0 Jan 20 01:08:02.322637 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 01:08:02.322653 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 01:08:02.322667 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 01:08:02.322682 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 01:08:02.322704 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 01:08:02.322720 kernel: audit: type=1403 audit(1768871252.009:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 01:08:02.322742 systemd[1]: Successfully loaded SELinux policy in 650.174ms. Jan 20 01:08:02.323093 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 98.379ms. Jan 20 01:08:02.323117 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 01:08:02.323134 systemd[1]: Detected virtualization kvm. Jan 20 01:08:02.323150 systemd[1]: Detected architecture x86-64. Jan 20 01:08:02.323166 systemd[1]: Detected first boot. Jan 20 01:08:02.323182 systemd[1]: Initializing machine ID from VM UUID. Jan 20 01:08:02.323198 zram_generator::config[1120]: No configuration found. Jan 20 01:08:02.323229 kernel: Guest personality initialized and is inactive Jan 20 01:08:02.323252 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 01:08:02.323268 kernel: Initialized host personality Jan 20 01:08:02.323283 kernel: NET: Registered PF_VSOCK protocol family Jan 20 01:08:02.351572 systemd[1]: Populated /etc with preset unit settings. Jan 20 01:08:02.359246 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 20 01:08:02.359272 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 01:08:02.359630 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 01:08:02.359657 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 01:08:02.359684 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 01:08:02.359706 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 01:08:02.359721 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 01:08:02.359735 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 01:08:02.360189 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 01:08:02.360212 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 01:08:02.360228 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 01:08:02.360244 systemd[1]: Created slice user.slice - User and Session Slice. 
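'Initializing machine ID from VM UUID' above means systemd seeded /etc/machine-id from the UUID the hypervisor exposes to the guest rather than generating a random one. On a KVM/QEMU guest this is normally the SMBIOS product UUID; the small Python sketch below shows where such a UUID can be read and how it maps onto the machine-id format (the path and the normalization are assumptions for illustration, not taken from the log).

  # Illustrative only: inspect the hypervisor-provided UUID that the
  # "Initializing machine ID from VM UUID" step is believed to use on this kind of guest.
  from pathlib import Path

  raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
  print(raw.replace("-", "").lower())  # machine-id form: 32 lowercase hex digits, no dashes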
Jan 20 01:08:02.360262 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:08:02.360279 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:08:02.365124 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 01:08:02.365151 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 01:08:02.365170 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 01:08:02.365188 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:08:02.365204 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 01:08:02.365221 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:08:02.365238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:08:02.365265 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 01:08:02.365282 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 01:08:02.365475 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 01:08:02.365499 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 01:08:02.365518 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:08:02.365538 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:08:02.365558 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:08:02.365577 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:08:02.365598 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 01:08:02.365618 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 01:08:02.365643 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 01:08:02.365659 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:08:02.365677 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:08:02.365702 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:08:02.365718 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 01:08:02.365734 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 01:08:02.366041 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 01:08:02.366068 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 01:08:02.366086 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:08:02.366113 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 01:08:02.366132 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 01:08:02.366152 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 01:08:02.366172 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 01:08:02.366191 systemd[1]: Reached target machines.target - Containers. 
Jan 20 01:08:02.366209 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 01:08:02.366229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:08:02.366248 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:08:02.366269 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 01:08:02.366286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:08:02.366472 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:08:02.366489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:08:02.366505 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 01:08:02.366520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:08:02.366536 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 01:08:02.366557 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 01:08:02.366578 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 01:08:02.366594 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 01:08:02.366609 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 01:08:02.366626 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:08:02.366643 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:08:02.366660 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:08:02.366680 kernel: ACPI: bus type drm_connector registered Jan 20 01:08:02.366698 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:08:02.366714 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 01:08:02.366734 kernel: loop: module loaded Jan 20 01:08:02.367026 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 01:08:02.367050 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:08:02.367069 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 01:08:02.367087 systemd[1]: Stopped verity-setup.service. Jan 20 01:08:02.367183 systemd-journald[1206]: Collecting audit messages is disabled. Jan 20 01:08:02.367218 systemd-journald[1206]: Journal started Jan 20 01:08:02.367253 systemd-journald[1206]: Runtime Journal (/run/log/journal/ef4234d4dba84815aec80d78501cc320) is 6M, max 48.3M, 42.2M free. Jan 20 01:07:47.913501 systemd[1]: Queued start job for default target multi-user.target. Jan 20 01:07:48.150709 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 01:07:48.165579 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 01:07:48.183088 systemd[1]: systemd-journald.service: Consumed 5.121s CPU time. Jan 20 01:08:02.679155 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
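The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop jobs being started above are all instances of systemd's modprobe@.service template, which loads the kernel module named by the instance part after the '@'. The unit file shipped on this image is not quoted in the log, but the template is roughly of this shape:

  # Approximate shape of modprobe@.service -- illustrative, not copied from this system
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no
  Before=sysinit.target

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=-/usr/sbin/modprobe -abq %i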
Jan 20 01:08:02.865233 kernel: fuse: init (API version 7.41) Jan 20 01:08:02.878470 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:08:03.169696 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 01:08:03.310259 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 01:08:03.440249 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 01:08:03.554574 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 01:08:03.658297 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 01:08:03.811294 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 01:08:03.905602 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 01:08:04.016687 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:08:04.146268 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 01:08:04.173601 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 01:08:04.275250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:08:04.276528 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:08:04.380600 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:08:04.388296 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:08:04.521550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:08:04.522473 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:08:04.696049 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 01:08:04.701542 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 01:08:04.869056 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:08:04.879663 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:08:04.953287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:08:05.098661 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:08:05.199270 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 01:08:05.282709 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 01:08:05.425214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:08:05.923733 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:08:06.044167 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 01:08:06.250158 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 01:08:06.346181 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 01:08:06.346555 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:08:06.522161 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 01:08:06.794117 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 01:08:06.874196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 20 01:08:06.907718 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 01:08:07.027665 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 01:08:07.137101 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:08:07.173247 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 01:08:07.244134 systemd-journald[1206]: Time spent on flushing to /var/log/journal/ef4234d4dba84815aec80d78501cc320 is 1.194980s for 975 entries. Jan 20 01:08:07.244134 systemd-journald[1206]: System Journal (/var/log/journal/ef4234d4dba84815aec80d78501cc320) is 8M, max 195.6M, 187.6M free. Jan 20 01:08:08.617252 systemd-journald[1206]: Received client request to flush runtime journal. Jan 20 01:08:07.336653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:08:07.398214 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:08:07.505586 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 01:08:07.899607 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 01:08:08.322269 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 01:08:08.419087 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 01:08:08.902261 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 01:08:08.955165 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 01:08:09.041276 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 01:08:09.227941 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 01:08:09.646992 kernel: loop0: detected capacity change from 0 to 128560 Jan 20 01:08:10.025282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:08:10.345258 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 01:08:10.365524 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 01:08:10.723173 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 01:08:10.879149 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:08:10.944051 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 01:08:11.394029 kernel: loop1: detected capacity change from 0 to 224512 Jan 20 01:08:12.181980 kernel: loop2: detected capacity change from 0 to 110984 Jan 20 01:08:12.479331 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 20 01:08:12.479357 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 20 01:08:13.036228 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:08:13.557636 kernel: loop3: detected capacity change from 0 to 128560 Jan 20 01:08:14.145141 kernel: loop4: detected capacity change from 0 to 224512 Jan 20 01:08:15.374361 kernel: loop5: detected capacity change from 0 to 110984 Jan 20 01:08:17.385221 (sd-merge)[1265]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 01:08:17.651384 (sd-merge)[1265]: Merged extensions into '/usr'. 
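The (sd-merge) entries above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes image is the .raw file Ignition downloaded and linked at /etc/extensions/kubernetes.raw earlier in the log. For such an image to be merged it must carry an extension-release file whose identification fields are compatible with the host; schematically (field values below are illustrative, not read from the actual image):

  # Inside the extension image, at /usr/lib/extension-release.d/extension-release.kubernetes
  ID=flatcar
  SYSEXT_LEVEL=1.0
  # A matching ID (or ID=_any) plus a compatible SYSEXT_LEVEL/VERSION_ID against the
  # host os-release is what allows systemd-sysext to merge the image into /usr.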
Jan 20 01:08:18.731287 systemd[1]: Reload requested from client PID 1241 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 01:08:18.739112 systemd[1]: Reloading... Jan 20 01:08:22.038015 zram_generator::config[1291]: No configuration found. Jan 20 01:08:27.054335 systemd[1]: Reloading finished in 8223 ms. Jan 20 01:08:28.097348 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 01:08:28.738430 systemd[1]: Starting ensure-sysext.service... Jan 20 01:08:28.833055 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:08:29.340146 systemd[1]: Reload requested from client PID 1328 ('systemctl') (unit ensure-sysext.service)... Jan 20 01:08:29.341148 systemd[1]: Reloading... Jan 20 01:08:29.814050 ldconfig[1236]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 01:08:30.334413 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 01:08:30.335644 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 01:08:30.341460 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:08:30.344247 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:08:30.426002 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:08:30.427416 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Jan 20 01:08:30.431675 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Jan 20 01:08:30.560745 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:08:30.599360 systemd-tmpfiles[1329]: Skipping /boot Jan 20 01:08:30.644695 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:08:30.645143 systemd-tmpfiles[1329]: Skipping /boot Jan 20 01:08:31.048712 zram_generator::config[1359]: No configuration found. Jan 20 01:08:38.485951 systemd[1]: Reloading finished in 9137 ms. Jan 20 01:08:38.599210 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 01:08:38.683371 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 01:08:38.903551 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:08:39.381098 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 01:08:39.507505 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 01:08:39.691387 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 01:08:39.834069 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:08:40.056190 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:08:40.403457 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 01:08:40.614443 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:08:40.623451 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
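The 'Duplicate line for path ...' warnings above come from systemd-tmpfiles noticing that more than one tmpfiles.d fragment declares the same path; the first declaration wins and later ones are ignored, so the warnings are harmless. An illustrative pair of fragments (hypothetical, not the actual Flatcar ones) that would trigger the same message:

  # /usr/lib/tmpfiles.d/a.conf   (hypothetical)
  d /var/lib/nfs/sm 0700 root root -
  # /usr/lib/tmpfiles.d/b.conf   (hypothetical) -- same path again, so systemd-tmpfiles
  # logs "Duplicate line for path" for this entry and ignores it
  d /var/lib/nfs/sm 0700 root root -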
Jan 20 01:08:40.707588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:08:40.852361 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:08:41.057500 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:08:41.142071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:08:41.142282 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:08:41.142421 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:08:41.157452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:08:41.170013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:08:41.291384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:08:41.297415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:08:41.412159 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 01:08:41.624287 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:08:41.633383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:08:41.655370 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:08:41.853099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:08:41.916591 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:08:41.931231 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:08:41.946289 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 01:08:41.997546 systemd-udevd[1401]: Using default interface naming scheme 'v255'. Jan 20 01:08:42.033599 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 01:08:42.121351 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:08:42.310443 augenrules[1433]: No rules Jan 20 01:08:42.382032 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:08:42.395138 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 01:08:42.473575 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 01:08:42.570402 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 01:08:42.659247 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:08:42.803147 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:08:42.813037 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 20 01:08:42.957166 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:08:42.957527 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:08:43.075525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:08:43.080501 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:08:43.230080 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 01:08:43.942491 systemd[1]: Finished ensure-sysext.service. Jan 20 01:08:44.066406 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:08:44.104225 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 01:08:44.200008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:08:44.219416 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:08:44.483069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:08:44.790053 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:08:44.920527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:08:45.021421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:08:45.021494 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:08:45.044493 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:08:45.276526 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 01:08:45.373587 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 01:08:45.376142 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:08:45.383196 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 01:08:45.476259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:08:45.491093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:08:45.647337 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:08:45.654163 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:08:46.008479 augenrules[1472]: /sbin/augenrules: No change Jan 20 01:08:46.036221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:08:46.043400 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:08:46.203509 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:08:46.208533 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:08:46.583117 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
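The 'augenrules: No change' message above (and the 'No rules' results around it) indicates that no audit rule fragments are installed under /etc/audit/rules.d, so audit-rules.service loads an empty ruleset. Had any fragment been present, augenrules would have folded it into the active rules; a typical fragment looks like this (hypothetical, nothing of the sort exists on this host):

  # /etc/audit/rules.d/10-identity.rules   (hypothetical example)
  -w /etc/passwd -p wa -k identity
  -w /etc/shadow -p wa -k identity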
Jan 20 01:08:46.583280 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:08:46.652491 augenrules[1503]: No rules Jan 20 01:08:46.651332 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:08:46.652237 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 01:08:47.108246 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 01:08:47.866306 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 01:08:48.641946 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:08:48.680442 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 20 01:08:48.864099 systemd-resolved[1400]: Positive Trust Anchors: Jan 20 01:08:48.864232 systemd-resolved[1400]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:08:48.864272 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:08:48.864615 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 01:08:49.120438 systemd-resolved[1400]: Defaulting to hostname 'linux'. Jan 20 01:08:49.235002 kernel: ACPI: button: Power Button [PWRF] Jan 20 01:08:49.167278 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:08:49.314629 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:08:49.385399 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 01:08:49.469098 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:08:49.579261 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 01:08:49.583329 systemd-networkd[1485]: lo: Link UP Jan 20 01:08:49.583340 systemd-networkd[1485]: lo: Gained carrier Jan 20 01:08:49.617472 systemd-networkd[1485]: Enumeration completed Jan 20 01:08:49.657616 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:08:49.657626 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:08:49.681609 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 01:08:49.738256 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:08:49.738330 systemd-networkd[1485]: eth0: Link UP Jan 20 01:08:49.739650 systemd-networkd[1485]: eth0: Gained carrier Jan 20 01:08:49.739677 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:08:49.784479 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
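The 'Positive Trust Anchors' entry above lists the DNSSEC DS record for the root zone that systemd-resolved trusts out of the box, and the negative anchors are the private and reserved zones it will never attempt to validate. Whether answers are actually validated depends on the DNSSEC= setting in resolved.conf, which the log does not show; for illustration:

  # Illustrative resolved.conf fragment -- not this host's actual configuration
  [Resolve]
  DNSSEC=allow-downgrade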
Jan 20 01:08:49.917606 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 01:08:50.131133 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 01:08:50.131194 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:08:50.230330 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 01:08:50.373634 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 01:08:50.577340 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 01:08:50.589162 systemd-networkd[1485]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 01:08:50.622639 systemd-timesyncd[1489]: Network configuration changed, trying to establish connection. Jan 20 01:08:50.701214 systemd-timesyncd[1489]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 01:08:50.701575 systemd-timesyncd[1489]: Initial clock synchronization to Tue 2026-01-20 01:08:50.883704 UTC. Jan 20 01:08:50.744548 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:08:50.894373 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 01:08:51.047404 systemd-networkd[1485]: eth0: Gained IPv6LL Jan 20 01:08:51.104265 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 01:08:51.317327 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 01:08:52.222210 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 01:08:52.237525 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 01:08:54.289661 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 01:08:54.480528 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 01:08:54.769438 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 01:08:54.809563 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 01:08:55.287060 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:08:55.474475 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 01:08:55.520746 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 01:08:56.743554 systemd[1]: Reached target network.target - Network. Jan 20 01:08:56.910316 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:08:57.082517 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:08:57.239623 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:08:57.240136 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:08:57.337723 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 01:08:57.492265 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 01:08:57.627175 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 01:08:57.788419 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 01:08:58.034547 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
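systemd-timesyncd ends up synchronizing against 10.0.0.1:123 in the entries above, most likely an NTP server handed out with the DHCPv4 lease rather than one configured statically. A static equivalent in timesyncd.conf would be roughly:

  # Illustrative only -- the server here was most likely DHCP-provided, not configured
  [Time]
  NTP=10.0.0.1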
Jan 20 01:08:58.134518 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 01:08:58.251467 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 01:08:58.655006 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 01:08:58.826407 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 01:08:59.321357 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 01:08:59.735699 jq[1546]: false Jan 20 01:08:59.958750 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 01:09:00.267131 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 01:09:00.476660 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 01:09:01.040426 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 01:09:01.233026 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 01:09:01.252544 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 01:09:01.338303 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 01:09:01.775209 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 01:09:01.980180 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 01:09:02.196589 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 01:09:02.272307 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 01:09:02.393619 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 01:09:02.821157 extend-filesystems[1547]: Found /dev/vda6 Jan 20 01:09:02.403520 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 01:09:03.114129 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 01:09:03.328947 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 01:09:03.404386 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 01:09:03.497147 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 01:09:03.558320 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing passwd entry cache Jan 20 01:09:03.614357 extend-filesystems[1547]: Found /dev/vda9 Jan 20 01:09:03.582079 oslogin_cache_refresh[1548]: Refreshing passwd entry cache Jan 20 01:09:03.694373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:09:03.868478 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 01:09:04.317261 extend-filesystems[1547]: Checking size of /dev/vda9 Jan 20 01:09:04.377714 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting users, quitting Jan 20 01:09:04.377714 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 20 01:09:04.377714 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing group entry cache Jan 20 01:09:04.324340 oslogin_cache_refresh[1548]: Failure getting users, quitting Jan 20 01:09:04.324381 oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 01:09:04.324500 oslogin_cache_refresh[1548]: Refreshing group entry cache Jan 20 01:09:04.406641 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting groups, quitting Jan 20 01:09:04.406641 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 01:09:04.402725 oslogin_cache_refresh[1548]: Failure getting groups, quitting Jan 20 01:09:04.403024 oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 01:09:04.729710 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 01:09:04.741116 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 01:09:05.877505 jq[1564]: true Jan 20 01:09:05.958567 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 01:09:05.972184 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 01:09:07.046472 tar[1567]: linux-amd64/LICENSE Jan 20 01:09:07.046472 tar[1567]: linux-amd64/helm Jan 20 01:09:07.730013 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 01:09:07.970677 jq[1595]: true Jan 20 01:09:07.900396 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 01:09:07.995564 extend-filesystems[1547]: Resized partition /dev/vda9 Jan 20 01:09:09.222586 extend-filesystems[1606]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 01:09:09.215367 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 01:09:09.653182 update_engine[1560]: I20260120 01:09:09.601349 1560 main.cc:92] Flatcar Update Engine starting Jan 20 01:09:10.110739 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 01:09:11.184235 dbus-daemon[1544]: [system] SELinux support is enabled Jan 20 01:09:11.706670 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 01:09:11.861390 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 01:09:11.861698 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 01:09:11.926128 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 01:09:11.926172 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 01:09:12.623254 systemd[1]: Started update-engine.service - Update Engine. Jan 20 01:09:12.882358 update_engine[1560]: I20260120 01:09:12.873627 1560 update_check_scheduler.cc:74] Next update check in 9m29s Jan 20 01:09:12.877665 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 01:09:13.204216 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:09:13.482556 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Jan 20 01:09:13.560267 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 01:09:13.752362 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 01:09:14.318349 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 01:09:14.400616 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 01:09:14.416351 systemd-logind[1554]: New seat seat0. Jan 20 01:09:14.469449 extend-filesystems[1606]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 01:09:14.469449 extend-filesystems[1606]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 01:09:14.469449 extend-filesystems[1606]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 01:09:14.907289 bash[1627]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:09:14.911237 sshd_keygen[1566]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 01:09:14.911542 extend-filesystems[1547]: Resized filesystem in /dev/vda9 Jan 20 01:09:14.480426 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 01:09:14.491543 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 01:09:14.605474 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 01:09:14.704026 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 01:09:14.705012 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 01:09:14.897167 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 01:09:16.324186 systemd-logind[1554]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 01:09:16.334035 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 01:09:16.378392 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 01:09:16.406408 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:40976.service - OpenSSH per-connection server daemon (10.0.0.1:40976). Jan 20 01:09:21.242537 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2441462168 wd_nsec: 2441461026 Jan 20 01:09:22.949117 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 01:09:22.950218 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 01:09:22.953081 systemd[1]: issuegen.service: Consumed 1.044s CPU time, 1.6M memory peak. Jan 20 01:09:23.422052 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 20 01:09:35.810532 locksmithd[1622]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 01:09:46.164676 containerd[1591]: time="2026-01-20T01:09:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 01:09:46.342418 containerd[1591]: time="2026-01-20T01:09:46.337674569Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 20 01:09:47.600475 sshd[1644]: Access denied for user core by PAM account configuration [preauth] Jan 20 01:09:48.181650 containerd[1591]: time="2026-01-20T01:09:48.177626852Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="524.68µs" Jan 20 01:09:48.191247 containerd[1591]: time="2026-01-20T01:09:48.190708178Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 01:09:48.193238 containerd[1591]: time="2026-01-20T01:09:48.191494810Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 01:09:48.193238 containerd[1591]: time="2026-01-20T01:09:48.192457778Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 01:09:48.193238 containerd[1591]: time="2026-01-20T01:09:48.192494704Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 01:09:48.193238 containerd[1591]: time="2026-01-20T01:09:48.192541609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:09:48.193238 containerd[1591]: time="2026-01-20T01:09:48.192639935Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:09:48.193238 containerd[1591]: time="2026-01-20T01:09:48.192668496Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:09:48.212236 containerd[1591]: time="2026-01-20T01:09:48.199710444Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:09:48.212236 containerd[1591]: time="2026-01-20T01:09:48.211228645Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:09:48.212236 containerd[1591]: time="2026-01-20T01:09:48.211282111Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:09:48.212236 containerd[1591]: time="2026-01-20T01:09:48.211297719Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 01:09:48.212236 containerd[1591]: time="2026-01-20T01:09:48.211512134Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 01:09:48.230384 containerd[1591]: time="2026-01-20T01:09:48.230325958Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:09:48.236636 
containerd[1591]: time="2026-01-20T01:09:48.234476206Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:09:48.236636 containerd[1591]: time="2026-01-20T01:09:48.234511940Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 01:09:48.236636 containerd[1591]: time="2026-01-20T01:09:48.234726786Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 01:09:48.247641 containerd[1591]: time="2026-01-20T01:09:48.247592024Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 01:09:48.249222 containerd[1591]: time="2026-01-20T01:09:48.248355063Z" level=info msg="metadata content store policy set" policy=shared Jan 20 01:09:48.612204 containerd[1591]: time="2026-01-20T01:09:48.610364329Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 01:09:48.622306 containerd[1591]: time="2026-01-20T01:09:48.622252384Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 01:09:48.622662 containerd[1591]: time="2026-01-20T01:09:48.622632324Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 01:09:48.623221 containerd[1591]: time="2026-01-20T01:09:48.623043894Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 01:09:48.629700 containerd[1591]: time="2026-01-20T01:09:48.629661510Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 01:09:48.630294 containerd[1591]: time="2026-01-20T01:09:48.630270863Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 01:09:48.631302 containerd[1591]: time="2026-01-20T01:09:48.631271810Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 01:09:48.631414 containerd[1591]: time="2026-01-20T01:09:48.631393928Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 01:09:48.632217 containerd[1591]: time="2026-01-20T01:09:48.632186371Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 01:09:48.632323 containerd[1591]: time="2026-01-20T01:09:48.632303632Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 01:09:48.632402 containerd[1591]: time="2026-01-20T01:09:48.632384497Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 01:09:48.632631 containerd[1591]: time="2026-01-20T01:09:48.632609100Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 01:09:48.639237 containerd[1591]: time="2026-01-20T01:09:48.638744521Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 01:09:48.639347 containerd[1591]: time="2026-01-20T01:09:48.639327286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 01:09:48.639537 containerd[1591]: time="2026-01-20T01:09:48.639509884Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 01:09:48.640553 containerd[1591]: time="2026-01-20T01:09:48.640521841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 01:09:48.641530 containerd[1591]: time="2026-01-20T01:09:48.641499275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 01:09:48.642382 containerd[1591]: time="2026-01-20T01:09:48.641599945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 01:09:48.650473 containerd[1591]: time="2026-01-20T01:09:48.650435121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 01:09:48.650596 containerd[1591]: time="2026-01-20T01:09:48.650575804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 01:09:48.651232 containerd[1591]: time="2026-01-20T01:09:48.651203429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 01:09:48.653561 containerd[1591]: time="2026-01-20T01:09:48.653533432Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 01:09:48.653646 containerd[1591]: time="2026-01-20T01:09:48.653628242Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 01:09:48.661435 containerd[1591]: time="2026-01-20T01:09:48.654605625Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 01:09:48.661574 containerd[1591]: time="2026-01-20T01:09:48.661551631Z" level=info msg="Start snapshots syncer" Jan 20 01:09:48.663030 containerd[1591]: time="2026-01-20T01:09:48.662717654Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 01:09:48.686360 containerd[1591]: time="2026-01-20T01:09:48.680516366Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 01:09:48.706406 containerd[1591]: time="2026-01-20T01:09:48.692580597Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 01:09:48.706406 containerd[1591]: time="2026-01-20T01:09:48.693568259Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 01:09:48.706406 containerd[1591]: time="2026-01-20T01:09:48.697631131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 01:09:48.706406 containerd[1591]: time="2026-01-20T01:09:48.697681662Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 01:09:48.706406 containerd[1591]: time="2026-01-20T01:09:48.697699944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 01:09:48.706406 containerd[1591]: time="2026-01-20T01:09:48.697714971Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 01:09:48.706406 containerd[1591]: time="2026-01-20T01:09:48.697731401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 01:09:48.706406 containerd[1591]: time="2026-01-20T01:09:48.697745245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 01:09:48.706406 containerd[1591]: time="2026-01-20T01:09:48.702219375Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 01:09:48.706406 containerd[1591]: time="2026-01-20T01:09:48.702270156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 01:09:48.706406 containerd[1591]: 
time="2026-01-20T01:09:48.702291305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 01:09:48.706406 containerd[1591]: time="2026-01-20T01:09:48.702308745Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 01:09:48.718471 containerd[1591]: time="2026-01-20T01:09:48.718415681Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:09:48.718610 containerd[1591]: time="2026-01-20T01:09:48.718583152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:09:48.718699 containerd[1591]: time="2026-01-20T01:09:48.718678383Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:09:48.735486 containerd[1591]: time="2026-01-20T01:09:48.735430457Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:09:48.735634 containerd[1591]: time="2026-01-20T01:09:48.735610330Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 01:09:48.735721 containerd[1591]: time="2026-01-20T01:09:48.735696494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 01:09:48.771021 containerd[1591]: time="2026-01-20T01:09:48.742506616Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 01:09:48.771021 containerd[1591]: time="2026-01-20T01:09:48.743309376Z" level=info msg="runtime interface created" Jan 20 01:09:48.771021 containerd[1591]: time="2026-01-20T01:09:48.743325255Z" level=info msg="created NRI interface" Jan 20 01:09:48.771021 containerd[1591]: time="2026-01-20T01:09:48.743339882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 01:09:48.771021 containerd[1591]: time="2026-01-20T01:09:48.743365236Z" level=info msg="Connect containerd service" Jan 20 01:09:48.771021 containerd[1591]: time="2026-01-20T01:09:48.743410438Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 01:09:48.794547 containerd[1591]: time="2026-01-20T01:09:48.794479919Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:09:49.893690 systemd-udevd[1401]: cpu0: Worker [1470] processing SEQNUM=1728 is taking a long time Jan 20 01:09:49.939495 systemd-udevd[1401]: cpu2: Worker [1457] processing SEQNUM=1730 is taking a long time Jan 20 01:09:49.939510 systemd-udevd[1401]: cpu1: Worker [1459] processing SEQNUM=1729 is taking a long time Jan 20 01:09:51.013711 systemd-udevd[1401]: cpu3: Worker [1449] processing SEQNUM=1731 is taking a long time Jan 20 01:09:58.434434 update_engine[1560]: I20260120 01:09:58.430075 1560 update_attempter.cc:509] Updating boot flags... 
Jan 20 01:09:59.506393 kernel: kvm_amd: TSC scaling supported Jan 20 01:09:59.519193 kernel: kvm_amd: Nested Virtualization enabled Jan 20 01:09:59.519374 kernel: kvm_amd: Nested Paging enabled Jan 20 01:09:59.542156 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 01:09:59.542280 kernel: kvm_amd: PMU virtualization is disabled Jan 20 01:10:03.575037 systemd[1]: sshd@0-10.0.0.13:22-10.0.0.1:40976.service: Deactivated successfully. Jan 20 01:10:04.003068 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 01:10:11.562272 containerd[1591]: time="2026-01-20T01:10:11.518978924Z" level=info msg="Start subscribing containerd event" Jan 20 01:10:11.562272 containerd[1591]: time="2026-01-20T01:10:11.526508174Z" level=info msg="Start recovering state" Jan 20 01:10:11.562272 containerd[1591]: time="2026-01-20T01:10:11.527059903Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 01:10:11.562272 containerd[1591]: time="2026-01-20T01:10:11.529893335Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 01:10:11.562272 containerd[1591]: time="2026-01-20T01:10:11.533272845Z" level=info msg="Start event monitor" Jan 20 01:10:11.562272 containerd[1591]: time="2026-01-20T01:10:11.533451011Z" level=info msg="Start cni network conf syncer for default" Jan 20 01:10:11.562272 containerd[1591]: time="2026-01-20T01:10:11.533476507Z" level=info msg="Start streaming server" Jan 20 01:10:11.562272 containerd[1591]: time="2026-01-20T01:10:11.533573575Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 01:10:11.562272 containerd[1591]: time="2026-01-20T01:10:11.533699386Z" level=info msg="runtime interface starting up..." Jan 20 01:10:11.562272 containerd[1591]: time="2026-01-20T01:10:11.533929366Z" level=info msg="starting plugins..." Jan 20 01:10:11.562272 containerd[1591]: time="2026-01-20T01:10:11.538637897Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 01:10:11.651090 containerd[1591]: time="2026-01-20T01:10:11.650611300Z" level=info msg="containerd successfully booted in 25.483092s" Jan 20 01:10:12.857060 tar[1567]: linux-amd64/README.md Jan 20 01:10:16.189915 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 01:10:23.708334 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 01:10:24.237320 kernel: clocksource: timekeeping watchdog on CPU2: kvm-clock wd-wd read-back delay of 4362873ns Jan 20 01:10:24.240173 kernel: clocksource: wd-tsc-wd read-back delay of 1129038ns, clock-skew test skipped! Jan 20 01:10:26.351237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:10:26.623997 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 01:10:26.681717 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 01:10:26.719496 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 01:10:26.994632 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:45210.service - OpenSSH per-connection server daemon (10.0.0.1:45210). Jan 20 01:10:28.261915 kernel: EDAC MC: Ver: 3.0.0 Jan 20 01:10:28.627608 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 45210 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:10:28.710350 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:10:28.830022 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 20 01:10:28.876004 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 01:10:29.180123 systemd-logind[1554]: New session 1 of user core. Jan 20 01:10:29.678206 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 01:10:29.777043 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 01:10:30.457435 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 01:10:30.579419 systemd-logind[1554]: New session c1 of user core. Jan 20 01:10:32.772339 systemd[1711]: Queued start job for default target default.target. Jan 20 01:10:32.814899 systemd[1711]: Created slice app.slice - User Application Slice. Jan 20 01:10:32.814953 systemd[1711]: Reached target paths.target - Paths. Jan 20 01:10:32.817694 systemd[1711]: Reached target timers.target - Timers. Jan 20 01:10:32.821649 systemd[1711]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 01:10:33.366254 systemd[1711]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 01:10:33.366546 systemd[1711]: Reached target sockets.target - Sockets. Jan 20 01:10:33.366613 systemd[1711]: Reached target basic.target - Basic System. Jan 20 01:10:33.366687 systemd[1711]: Reached target default.target - Main User Target. Jan 20 01:10:33.366742 systemd[1711]: Startup finished in 2.696s. Jan 20 01:10:33.367021 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 01:10:33.421720 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 01:10:33.742187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:10:33.746232 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 01:10:33.880594 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:56570.service - OpenSSH per-connection server daemon (10.0.0.1:56570). Jan 20 01:10:33.911213 systemd[1]: Startup finished in 37.125s (kernel) + 1min 57.123s (initrd) + 3min 2.514s (userspace) = 5min 36.763s. Jan 20 01:10:33.973885 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:10:34.845593 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 56570 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:10:34.861317 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:10:35.095191 systemd-logind[1554]: New session 2 of user core. Jan 20 01:10:35.121621 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 01:10:35.664971 sshd[1735]: Connection closed by 10.0.0.1 port 56570 Jan 20 01:10:35.730244 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jan 20 01:10:35.816534 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:56314.service - OpenSSH per-connection server daemon (10.0.0.1:56314). Jan 20 01:10:35.818389 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:56570.service: Deactivated successfully. Jan 20 01:10:35.860034 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 01:10:35.895028 systemd-logind[1554]: Session 2 logged out. Waiting for processes to exit. Jan 20 01:10:35.927450 systemd-logind[1554]: Removed session 2. 
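The "Startup finished" entry above reports the kernel, initrd, and userspace phases together with their total; the total is simply the sum of the three phases, which can be checked directly (the 1 ms difference against the logged total comes from systemd rounding microsecond-precision values before printing):

    # Re-adding the three startup phases reported by systemd (values in seconds).
    kernel, initrd, userspace = 37.125, 1 * 60 + 57.123, 3 * 60 + 2.514
    total = kernel + initrd + userspace
    print(f"{int(total // 60)}min {total % 60:.3f}s")   # -> 5min 36.762s (logged: 5min 36.763s)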
Jan 20 01:10:36.386388 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 56314 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:10:36.407993 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:10:36.496366 systemd-logind[1554]: New session 3 of user core. Jan 20 01:10:36.512071 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 01:10:36.710236 sshd[1750]: Connection closed by 10.0.0.1 port 56314 Jan 20 01:10:36.717182 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Jan 20 01:10:36.873389 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:56314.service: Deactivated successfully. Jan 20 01:10:36.888565 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 01:10:36.899404 systemd-logind[1554]: Session 3 logged out. Waiting for processes to exit. Jan 20 01:10:36.944120 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:56316.service - OpenSSH per-connection server daemon (10.0.0.1:56316). Jan 20 01:10:36.970545 systemd-logind[1554]: Removed session 3. Jan 20 01:10:37.360174 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 56316 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:10:37.383513 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:10:37.587954 systemd-logind[1554]: New session 4 of user core. Jan 20 01:10:37.605089 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 01:10:38.115482 sshd[1759]: Connection closed by 10.0.0.1 port 56316 Jan 20 01:10:38.122721 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jan 20 01:10:38.228575 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:56316.service: Deactivated successfully. Jan 20 01:10:38.572479 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 01:10:38.696372 systemd-logind[1554]: Session 4 logged out. Waiting for processes to exit. Jan 20 01:10:38.820234 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:56332.service - OpenSSH per-connection server daemon (10.0.0.1:56332). Jan 20 01:10:38.871235 systemd-logind[1554]: Removed session 4. Jan 20 01:10:39.720740 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 56332 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:10:39.746446 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:10:39.835368 systemd-logind[1554]: New session 5 of user core. Jan 20 01:10:39.880361 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 01:10:40.552478 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 01:10:40.553421 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:10:46.855382 kubelet[1730]: E0120 01:10:46.782658 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:10:46.875645 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:10:46.876915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:10:46.884035 systemd[1]: kubelet.service: Consumed 11.768s CPU time, 269.2M memory peak. 
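The kubelet exits here, and keeps exiting in the restart loop that follows, because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-based node that file is normally written by `kubeadm init` or `kubeadm join`, so the failures are expected until one of those has run. Purely to illustrate the shape of the file (not a recommended hand-written configuration), a minimal KubeletConfiguration could be emitted like this; every value below is an assumption made for the sketch, not a setting read from this host:

    import pathlib

    # Sketch of a minimal /var/lib/kubelet/config.yaml. On kubeadm clusters this
    # file is generated by `kubeadm init`/`kubeadm join`; the values here are
    # illustrative assumptions.
    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # assumed, matching SystemdCgroup=true in the containerd CRI config above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    failSwapOn: false
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(KUBELET_CONFIG)

Because the unit is set to restart automatically (the "Scheduled restart job, restart counter is at N" entries further down), the kubelet would pick the file up on its next scheduled restart once it exists.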
Jan 20 01:10:57.196532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 01:10:57.296645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:10:59.919050 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 01:11:00.189274 (dockerd)[1797]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 01:11:15.096661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:11:15.261177 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:11:17.824153 kubelet[1808]: E0120 01:11:17.821708 1808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:11:17.901313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:11:17.903496 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:11:17.921457 systemd[1]: kubelet.service: Consumed 5.012s CPU time, 110.7M memory peak. Jan 20 01:11:18.592555 dockerd[1797]: time="2026-01-20T01:11:18.583204956Z" level=info msg="Starting up" Jan 20 01:11:18.711563 dockerd[1797]: time="2026-01-20T01:11:18.708064609Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 01:11:19.295543 dockerd[1797]: time="2026-01-20T01:11:19.288212767Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 01:11:20.066895 systemd[1]: var-lib-docker-metacopy\x2dcheck689188127-merged.mount: Deactivated successfully. Jan 20 01:11:21.020533 dockerd[1797]: time="2026-01-20T01:11:21.006568279Z" level=info msg="Loading containers: start." Jan 20 01:11:21.259335 kernel: Initializing XFRM netlink socket Jan 20 01:11:27.971370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 01:11:28.005340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:11:31.294125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:11:31.395216 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:11:32.721608 systemd-networkd[1485]: docker0: Link UP Jan 20 01:11:32.830138 dockerd[1797]: time="2026-01-20T01:11:32.827508458Z" level=info msg="Loading containers: done." Jan 20 01:11:33.587224 kubelet[1975]: E0120 01:11:33.584568 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:11:33.663409 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3637993525-merged.mount: Deactivated successfully. Jan 20 01:11:33.667286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:11:33.674672 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 20 01:11:33.675573 systemd[1]: kubelet.service: Consumed 1.463s CPU time, 110.5M memory peak. Jan 20 01:11:33.693217 dockerd[1797]: time="2026-01-20T01:11:33.691499117Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 01:11:33.697720 dockerd[1797]: time="2026-01-20T01:11:33.696283175Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 01:11:33.697720 dockerd[1797]: time="2026-01-20T01:11:33.696630535Z" level=info msg="Initializing buildkit" Jan 20 01:11:34.608469 dockerd[1797]: time="2026-01-20T01:11:34.606585441Z" level=info msg="Completed buildkit initialization" Jan 20 01:11:34.779350 dockerd[1797]: time="2026-01-20T01:11:34.779188672Z" level=info msg="Daemon has completed initialization" Jan 20 01:11:34.789079 dockerd[1797]: time="2026-01-20T01:11:34.783415264Z" level=info msg="API listen on /run/docker.sock" Jan 20 01:11:34.813273 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 01:11:43.890435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 01:11:43.962347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:11:50.468598 containerd[1591]: time="2026-01-20T01:11:50.459360832Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 01:11:51.207514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:11:51.534618 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:11:57.194446 kubelet[2056]: E0120 01:11:57.192892 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:11:57.288295 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:11:57.299489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:11:57.327648 systemd[1]: kubelet.service: Consumed 3.147s CPU time, 110.9M memory peak. Jan 20 01:11:57.895521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75358357.mount: Deactivated successfully. Jan 20 01:12:07.493938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 01:12:07.580032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:12:09.962127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:12:10.034248 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:12:11.847429 kubelet[2128]: E0120 01:12:11.846502 2128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:12:11.973424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:12:11.974287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:12:12.023034 systemd[1]: kubelet.service: Consumed 1.330s CPU time, 114.5M memory peak. Jan 20 01:12:22.235513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 01:12:22.278656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:12:28.308557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:12:29.983226 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:12:36.666937 kubelet[2149]: E0120 01:12:36.663536 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:12:36.701674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:12:36.709941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:12:36.775514 systemd[1]: kubelet.service: Consumed 2.549s CPU time, 109.8M memory peak. 
Jan 20 01:12:46.324623 containerd[1591]: time="2026-01-20T01:12:46.320642842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:12:46.347373 containerd[1591]: time="2026-01-20T01:12:46.345418312Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 20 01:12:46.378385 containerd[1591]: time="2026-01-20T01:12:46.374492713Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:12:46.447334 containerd[1591]: time="2026-01-20T01:12:46.425735653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:12:46.474855 containerd[1591]: time="2026-01-20T01:12:46.468175980Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 55.990199235s" Jan 20 01:12:46.474855 containerd[1591]: time="2026-01-20T01:12:46.470139225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 01:12:46.561995 containerd[1591]: time="2026-01-20T01:12:46.561349056Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 01:12:46.893557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 20 01:12:46.965648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:12:53.871570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:12:55.013675 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:13:05.839384 kubelet[2165]: E0120 01:13:05.835001 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:13:05.916208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:13:05.923338 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:13:06.000617 systemd[1]: kubelet.service: Consumed 3.879s CPU time, 110.5M memory peak. Jan 20 01:13:16.317472 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 20 01:13:16.462513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:13:25.312357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:13:25.889005 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:13:37.073174 kubelet[2187]: E0120 01:13:37.055303 2187 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:13:37.159326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:13:37.172280 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:13:37.186227 systemd[1]: kubelet.service: Consumed 5.371s CPU time, 110.2M memory peak. Jan 20 01:13:43.320253 containerd[1591]: time="2026-01-20T01:13:43.317573078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:13:43.457564 containerd[1591]: time="2026-01-20T01:13:43.410579968Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 20 01:13:43.594014 containerd[1591]: time="2026-01-20T01:13:43.588677518Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:13:43.630445 containerd[1591]: time="2026-01-20T01:13:43.625540098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:13:43.636051 containerd[1591]: time="2026-01-20T01:13:43.629652669Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 57.068234546s" Jan 20 01:13:43.636051 containerd[1591]: time="2026-01-20T01:13:43.632748798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 01:13:43.672721 containerd[1591]: time="2026-01-20T01:13:43.666530658Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 01:13:47.383200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 20 01:13:47.487006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:13:53.362359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:13:53.483162 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:13:58.018049 kubelet[2209]: E0120 01:13:58.017469 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:13:58.086105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:13:58.098480 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:13:58.158668 systemd[1]: kubelet.service: Consumed 2.835s CPU time, 108.6M memory peak. Jan 20 01:14:09.599396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 20 01:14:09.643364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:14:23.923164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:14:25.035714 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:14:28.023538 containerd[1591]: time="2026-01-20T01:14:28.020469142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:14:28.045478 containerd[1591]: time="2026-01-20T01:14:28.041170879Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 20 01:14:28.071606 containerd[1591]: time="2026-01-20T01:14:28.069544996Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:14:28.129292 containerd[1591]: time="2026-01-20T01:14:28.115670405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:14:28.129639 containerd[1591]: time="2026-01-20T01:14:28.129588240Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 44.462995508s" Jan 20 01:14:28.162408 containerd[1591]: time="2026-01-20T01:14:28.129750031Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 01:14:28.390036 containerd[1591]: time="2026-01-20T01:14:28.371067795Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 01:14:31.472439 kubelet[2229]: E0120 01:14:31.472358 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:14:31.524714 systemd[1]: kubelet.service: 
Main process exited, code=exited, status=1/FAILURE Jan 20 01:14:31.540501 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:14:31.542469 systemd[1]: kubelet.service: Consumed 3.730s CPU time, 112.8M memory peak. Jan 20 01:14:41.673925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 20 01:14:41.875696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:14:52.342360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:14:52.473574 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:14:55.416285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752589247.mount: Deactivated successfully. Jan 20 01:14:55.568118 kubelet[2251]: E0120 01:14:55.567484 2251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:14:55.605138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:14:55.610454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:14:55.617243 systemd[1]: kubelet.service: Consumed 4.808s CPU time, 110.7M memory peak. Jan 20 01:15:05.662171 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 20 01:15:05.709364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:15:12.180662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:15:12.579354 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:15:20.514137 kubelet[2271]: E0120 01:15:20.504489 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:15:20.519649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:15:20.520283 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:15:20.521736 systemd[1]: kubelet.service: Consumed 3.407s CPU time, 109.8M memory peak. 
Jan 20 01:15:23.179964 containerd[1591]: time="2026-01-20T01:15:23.176458563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:15:23.204474 containerd[1591]: time="2026-01-20T01:15:23.202452874Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 20 01:15:23.221041 containerd[1591]: time="2026-01-20T01:15:23.220144792Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:15:23.270748 containerd[1591]: time="2026-01-20T01:15:23.269330480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:15:23.275131 containerd[1591]: time="2026-01-20T01:15:23.272179289Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 54.891041846s" Jan 20 01:15:23.275131 containerd[1591]: time="2026-01-20T01:15:23.272284133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 01:15:23.306077 containerd[1591]: time="2026-01-20T01:15:23.304741728Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 01:15:28.620262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount859610689.mount: Deactivated successfully. Jan 20 01:15:30.653176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 20 01:15:30.705717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:15:36.421711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:15:36.793370 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:15:41.982003 kubelet[2298]: E0120 01:15:41.980536 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:15:42.068648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:15:42.069171 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:15:42.089502 systemd[1]: kubelet.service: Consumed 2.517s CPU time, 110.8M memory peak. Jan 20 01:15:52.175189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 20 01:15:52.236656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:16:07.253597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:16:07.445731 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:16:13.928334 kubelet[2359]: E0120 01:16:13.920447 2359 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:16:13.965616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:16:13.973717 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:16:13.974720 systemd[1]: kubelet.service: Consumed 3.450s CPU time, 110.8M memory peak. Jan 20 01:16:15.824947 containerd[1591]: time="2026-01-20T01:16:15.821333112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:16:15.824947 containerd[1591]: time="2026-01-20T01:16:15.823377270Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 20 01:16:15.833585 containerd[1591]: time="2026-01-20T01:16:15.830625090Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:16:15.889738 containerd[1591]: time="2026-01-20T01:16:15.888438970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:16:15.900836 containerd[1591]: time="2026-01-20T01:16:15.897323879Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 52.592299658s" Jan 20 01:16:15.900836 containerd[1591]: time="2026-01-20T01:16:15.897397997Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 01:16:15.918257 containerd[1591]: time="2026-01-20T01:16:15.915319808Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 01:16:18.023947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179958462.mount: Deactivated successfully. 
Jan 20 01:16:18.078864 containerd[1591]: time="2026-01-20T01:16:18.078051030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:16:18.113617 containerd[1591]: time="2026-01-20T01:16:18.107385470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 01:16:18.122587 containerd[1591]: time="2026-01-20T01:16:18.119628136Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:16:18.155324 containerd[1591]: time="2026-01-20T01:16:18.132361623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:16:18.155324 containerd[1591]: time="2026-01-20T01:16:18.136016436Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.220599809s" Jan 20 01:16:18.155324 containerd[1591]: time="2026-01-20T01:16:18.136059225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 01:16:18.166315 containerd[1591]: time="2026-01-20T01:16:18.166183068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 01:16:19.948569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193882759.mount: Deactivated successfully. Jan 20 01:16:24.585723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 20 01:16:24.601286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:16:42.530597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:16:42.720988 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:16:45.481031 kubelet[2422]: E0120 01:16:45.480437 2422 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:16:45.490574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:16:45.491102 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:16:45.492501 systemd[1]: kubelet.service: Consumed 2.564s CPU time, 110M memory peak. Jan 20 01:16:55.830427 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Jan 20 01:16:56.087345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:17:04.409401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:17:04.493136 (kubelet)[2453]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:17:09.859066 kubelet[2453]: E0120 01:17:09.857241 2453 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:17:09.883542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:17:09.884258 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:17:09.885377 systemd[1]: kubelet.service: Consumed 3.290s CPU time, 112.3M memory peak. Jan 20 01:17:20.226149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Jan 20 01:17:20.677423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:17:24.975203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:17:25.231038 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:17:28.422912 kubelet[2469]: E0120 01:17:28.409300 2469 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:17:28.468374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:17:28.472246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:17:28.480178 systemd[1]: kubelet.service: Consumed 2.067s CPU time, 110.5M memory peak. 
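The restart loop above (restart counters 12 through 16) has a single cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is only written during kubeadm init or kubeadm join, so every start exits with status 1 and systemd schedules the next attempt. For orientation, a minimal sketch of the KubeletConfiguration that eventually lands at that path; the field values are illustrative assumptions, not read from this host:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # matches the driver the CRI runtime reports later in this log
staticPodPath: /etc/kubernetes/manifests   # where the control-plane static pods are read from
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10                             # kubeadm's usual service DNS address, assumed here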
Jan 20 01:17:35.481688 containerd[1591]: time="2026-01-20T01:17:35.480021477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:17:35.504717 containerd[1591]: time="2026-01-20T01:17:35.504540628Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 20 01:17:35.530030 containerd[1591]: time="2026-01-20T01:17:35.524355440Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:17:35.568052 containerd[1591]: time="2026-01-20T01:17:35.566353311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:17:35.598039 containerd[1591]: time="2026-01-20T01:17:35.593974985Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1m17.427733809s" Jan 20 01:17:35.598039 containerd[1591]: time="2026-01-20T01:17:35.594303814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 01:17:38.673978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Jan 20 01:17:38.721291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:17:42.280327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:17:42.610226 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:17:44.491350 kubelet[2510]: E0120 01:17:44.491035 2510 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:17:44.520279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:17:44.520731 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:17:44.521994 systemd[1]: kubelet.service: Consumed 1.633s CPU time, 109.9M memory peak. Jan 20 01:17:54.715403 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Jan 20 01:17:54.978057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:18:00.170103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:18:00.274548 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:18:01.423148 kubelet[2529]: E0120 01:18:01.411155 2529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:18:01.716354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:18:01.722022 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:18:01.774195 systemd[1]: kubelet.service: Consumed 1.527s CPU time, 110.5M memory peak. Jan 20 01:18:05.956452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:18:05.956963 systemd[1]: kubelet.service: Consumed 1.527s CPU time, 110.5M memory peak. Jan 20 01:18:05.985534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:18:06.325016 systemd[1]: Reload requested from client PID 2546 ('systemctl') (unit session-5.scope)... Jan 20 01:18:06.328134 systemd[1]: Reloading... Jan 20 01:18:07.072012 zram_generator::config[2589]: No configuration found. Jan 20 01:18:08.881679 systemd[1]: Reloading finished in 2544 ms. Jan 20 01:18:09.394433 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 01:18:09.394749 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 01:18:09.395988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:18:09.396054 systemd[1]: kubelet.service: Consumed 450ms CPU time, 98.3M memory peak. Jan 20 01:18:09.440693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:18:11.770493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:18:11.848005 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:18:13.300538 kubelet[2638]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:18:13.300538 kubelet[2638]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:18:13.300538 kubelet[2638]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
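After the systemd reload the kubelet (PID 2638) comes up without the config.yaml error, but it warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir were passed as deprecated flags. The first and last have config-file equivalents; --pod-infra-container-image is slated for removal in 1.35 because the image garbage collector now learns the sandbox image from the CRI runtime, as the warning itself says. A hedged sketch of the equivalent KubeletConfiguration keys (the socket path is an assumption based on containerd defaults; the plugin directory matches the flexvolume path logged a little further down):

containerRuntimeEndpoint: unix:///run/containerd/containerd.sock      # assumed containerd default socket
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/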
Jan 20 01:18:13.325505 kubelet[2638]: I0120 01:18:13.298951 2638 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:18:16.631906 kubelet[2638]: I0120 01:18:16.631003 2638 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 01:18:16.631906 kubelet[2638]: I0120 01:18:16.631069 2638 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:18:16.685504 kubelet[2638]: I0120 01:18:16.636208 2638 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 01:18:17.482648 kubelet[2638]: I0120 01:18:17.469388 2638 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:18:17.482648 kubelet[2638]: E0120 01:18:17.481149 2638 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:17.567023 kubelet[2638]: I0120 01:18:17.563973 2638 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:18:17.657946 kubelet[2638]: I0120 01:18:17.656380 2638 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 01:18:17.663576 kubelet[2638]: I0120 01:18:17.660736 2638 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:18:17.663576 kubelet[2638]: I0120 01:18:17.661098 2638 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:18:17.663576 kubelet[2638]: I0120 01:18:17.661970 2638 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:18:17.663576 
kubelet[2638]: I0120 01:18:17.661992 2638 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 01:18:17.668669 kubelet[2638]: I0120 01:18:17.663249 2638 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:18:17.892966 kubelet[2638]: I0120 01:18:17.888350 2638 kubelet.go:446] "Attempting to sync node with API server" Jan 20 01:18:17.898691 kubelet[2638]: I0120 01:18:17.895730 2638 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:18:17.898691 kubelet[2638]: I0120 01:18:17.896615 2638 kubelet.go:352] "Adding apiserver pod source" Jan 20 01:18:17.898691 kubelet[2638]: I0120 01:18:17.896935 2638 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:18:18.082556 kubelet[2638]: W0120 01:18:18.057251 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:18.112581 kubelet[2638]: E0120 01:18:18.108527 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:18.118595 kubelet[2638]: W0120 01:18:18.114127 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:18.118595 kubelet[2638]: E0120 01:18:18.117692 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:18.200953 kubelet[2638]: I0120 01:18:18.155625 2638 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 01:18:18.229016 kubelet[2638]: I0120 01:18:18.228731 2638 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 01:18:18.240020 kubelet[2638]: W0120 01:18:18.234068 2638 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
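The NodeConfig dump above is the state the container manager starts from: systemd cgroup driver, cgroup root /, per-QoS cgroups enabled, and a set of hard eviction thresholds. Expressed in KubeletConfiguration form, those thresholds correspond to the following evictionHard map, a direct transcription of the percentages and the 100Mi memory signal shown in the dump:

evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"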
Jan 20 01:18:18.308170 kubelet[2638]: I0120 01:18:18.304244 2638 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:18:18.308170 kubelet[2638]: I0120 01:18:18.304477 2638 server.go:1287] "Started kubelet" Jan 20 01:18:18.348679 kubelet[2638]: I0120 01:18:18.334944 2638 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:18:18.358000 kubelet[2638]: I0120 01:18:18.353926 2638 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:18:18.516053 kubelet[2638]: I0120 01:18:18.485239 2638 server.go:479] "Adding debug handlers to kubelet server" Jan 20 01:18:18.539091 kubelet[2638]: I0120 01:18:18.475176 2638 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:18:18.622745 kubelet[2638]: I0120 01:18:18.609454 2638 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:18:18.622745 kubelet[2638]: I0120 01:18:18.610721 2638 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:18:18.710506 kubelet[2638]: I0120 01:18:18.700441 2638 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:18:18.779062 kubelet[2638]: E0120 01:18:18.742219 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:18.822679 kubelet[2638]: E0120 01:18:18.496434 2638 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4b9e9f9b720f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,LastTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:18:18.822679 kubelet[2638]: E0120 01:18:18.784973 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Jan 20 01:18:18.822679 kubelet[2638]: I0120 01:18:18.790228 2638 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:18:18.822679 kubelet[2638]: I0120 01:18:18.790442 2638 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:18:18.834498 kubelet[2638]: W0120 01:18:18.834431 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:18.850468 kubelet[2638]: E0120 01:18:18.850422 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: 
connect: connection refused" logger="UnhandledError" Jan 20 01:18:18.874593 kubelet[2638]: E0120 01:18:18.873684 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:18.922962 kubelet[2638]: E0120 01:18:18.922184 2638 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:18:18.984124 kubelet[2638]: I0120 01:18:18.973664 2638 factory.go:221] Registration of the systemd container factory successfully Jan 20 01:18:18.984124 kubelet[2638]: I0120 01:18:18.974164 2638 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:18:18.984124 kubelet[2638]: E0120 01:18:18.980468 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:19.003059 kubelet[2638]: E0120 01:18:19.000134 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Jan 20 01:18:19.018644 kubelet[2638]: I0120 01:18:19.016736 2638 factory.go:221] Registration of the containerd container factory successfully Jan 20 01:18:19.093986 kubelet[2638]: E0120 01:18:19.082606 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:19.187991 kubelet[2638]: E0120 01:18:19.186166 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:19.319609 kubelet[2638]: E0120 01:18:19.305037 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:19.410679 kubelet[2638]: E0120 01:18:19.406484 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:19.414173 kubelet[2638]: E0120 01:18:19.414133 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Jan 20 01:18:19.496379 kubelet[2638]: W0120 01:18:19.482689 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:19.496379 kubelet[2638]: E0120 01:18:19.491690 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:19.519143 kubelet[2638]: E0120 01:18:19.518715 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:19.636055 kubelet[2638]: E0120 01:18:19.628173 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 
01:18:19.669614 kubelet[2638]: E0120 01:18:19.665498 2638 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:19.669614 kubelet[2638]: W0120 01:18:19.665505 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:19.679415 kubelet[2638]: E0120 01:18:19.675192 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:19.679415 kubelet[2638]: I0120 01:18:19.678472 2638 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:18:19.679415 kubelet[2638]: I0120 01:18:19.678494 2638 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:18:19.679415 kubelet[2638]: I0120 01:18:19.678712 2638 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:18:19.748435 kubelet[2638]: I0120 01:18:19.746043 2638 policy_none.go:49] "None policy: Start" Jan 20 01:18:19.748435 kubelet[2638]: I0120 01:18:19.746084 2638 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:18:19.748435 kubelet[2638]: I0120 01:18:19.746103 2638 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:18:19.761984 kubelet[2638]: E0120 01:18:19.751634 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:19.853451 kubelet[2638]: E0120 01:18:19.852028 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:20.154237 kubelet[2638]: E0120 01:18:20.114950 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:20.154237 kubelet[2638]: W0120 01:18:20.148952 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:20.154237 kubelet[2638]: E0120 01:18:20.149014 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:20.183555 kubelet[2638]: I0120 01:18:20.169601 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 01:18:20.220563 kubelet[2638]: E0120 01:18:20.217544 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:20.293609 kubelet[2638]: I0120 01:18:20.288716 2638 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 20 01:18:20.293609 kubelet[2638]: I0120 01:18:20.289210 2638 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 01:18:20.297736 kubelet[2638]: I0120 01:18:20.295566 2638 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:18:20.297736 kubelet[2638]: I0120 01:18:20.295726 2638 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 01:18:20.297736 kubelet[2638]: E0120 01:18:20.296197 2638 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:18:20.736086 kubelet[2638]: E0120 01:18:20.726392 2638 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:18:20.745583 kubelet[2638]: E0120 01:18:20.744565 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:20.798544 kubelet[2638]: W0120 01:18:20.796330 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:20.798544 kubelet[2638]: E0120 01:18:20.796498 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:20.798544 kubelet[2638]: E0120 01:18:20.796968 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" Jan 20 01:18:20.853216 kubelet[2638]: E0120 01:18:20.850909 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:20.867602 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 20 01:18:20.963537 kubelet[2638]: E0120 01:18:20.958568 2638 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:18:20.963537 kubelet[2638]: E0120 01:18:20.959204 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:20.979524 kubelet[2638]: E0120 01:18:20.969482 2638 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4b9e9f9b720f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,LastTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:18:21.057098 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:18:21.075978 kubelet[2638]: E0120 01:18:21.075927 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:21.128459 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 01:18:21.182485 kubelet[2638]: E0120 01:18:21.179128 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:21.208355 kubelet[2638]: I0120 01:18:21.204053 2638 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 01:18:21.361500 kubelet[2638]: E0120 01:18:21.317099 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:21.389725 kubelet[2638]: E0120 01:18:21.371067 2638 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:18:22.437555 kubelet[2638]: E0120 01:18:21.528354 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:18:22.437555 kubelet[2638]: I0120 01:18:21.574190 2638 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:18:22.530186 kubelet[2638]: E0120 01:18:22.529085 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="3.2s" Jan 20 01:18:22.537043 kubelet[2638]: W0120 01:18:22.531503 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:22.537043 kubelet[2638]: E0120 01:18:22.532470 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:22.537043 kubelet[2638]: I0120 01:18:22.533042 2638 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:18:22.537043 kubelet[2638]: W0120 01:18:22.533136 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:22.537043 kubelet[2638]: E0120 01:18:22.533196 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:22.537043 kubelet[2638]: W0120 01:18:22.533543 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:22.537043 kubelet[2638]: E0120 01:18:22.533588 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:22.537584 kubelet[2638]: I0120 01:18:22.534731 2638 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:18:22.543541 kubelet[2638]: W0120 01:18:22.541736 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:22.543541 kubelet[2638]: E0120 01:18:22.542117 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:22.903438 kubelet[2638]: E0120 01:18:22.886469 2638 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:18:23.114462 kubelet[2638]: E0120 01:18:23.113355 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:18:23.588311 kubelet[2638]: I0120 01:18:23.577665 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:23.597141 kubelet[2638]: E0120 01:18:23.596520 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 20 01:18:23.643386 kubelet[2638]: I0120 01:18:23.636188 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:18:23.643386 kubelet[2638]: I0120 01:18:23.642193 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:18:23.643386 kubelet[2638]: I0120 01:18:23.642456 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:18:23.888651 kubelet[2638]: I0120 01:18:23.753201 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:18:23.903906 kubelet[2638]: I0120 01:18:23.887073 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:18:23.903906 kubelet[2638]: I0120 01:18:23.903654 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:18:23.903906 kubelet[2638]: I0120 01:18:23.903704 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:18:23.903906 kubelet[2638]: I0120 01:18:23.903739 2638 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:18:23.917712 kubelet[2638]: I0120 01:18:23.917558 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:23.926672 kubelet[2638]: E0120 01:18:23.918964 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 20 01:18:23.926672 kubelet[2638]: E0120 01:18:23.926063 2638 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:24.006010 kubelet[2638]: I0120 01:18:24.004536 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:18:24.028109 systemd[1]: Created slice kubepods-burstable-podcb5a5097cb878fec302ec9db4124e0fe.slice - libcontainer container kubepods-burstable-podcb5a5097cb878fec302ec9db4124e0fe.slice. Jan 20 01:18:24.167549 kubelet[2638]: E0120 01:18:24.164624 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:24.179381 kubelet[2638]: E0120 01:18:24.176631 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:24.182384 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 20 01:18:24.562001 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. 
Jan 20 01:18:24.618103 containerd[1591]: time="2026-01-20T01:18:24.614715508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cb5a5097cb878fec302ec9db4124e0fe,Namespace:kube-system,Attempt:0,}" Jan 20 01:18:24.716004 kubelet[2638]: E0120 01:18:24.713653 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:24.716004 kubelet[2638]: E0120 01:18:24.715938 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:24.805680 kubelet[2638]: I0120 01:18:24.719139 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:24.805680 kubelet[2638]: E0120 01:18:24.728128 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:24.805680 kubelet[2638]: E0120 01:18:24.804565 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:24.806022 containerd[1591]: time="2026-01-20T01:18:24.725938402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 01:18:24.814070 kubelet[2638]: E0120 01:18:24.813956 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 20 01:18:24.822718 containerd[1591]: time="2026-01-20T01:18:24.822018578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 01:18:25.802467 kubelet[2638]: E0120 01:18:25.797289 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="6.4s" Jan 20 01:18:25.818038 kubelet[2638]: W0120 01:18:25.815045 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:25.818038 kubelet[2638]: I0120 01:18:25.815073 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:25.818038 kubelet[2638]: E0120 01:18:25.815107 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:25.819722 kubelet[2638]: E0120 01:18:25.818394 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 20 01:18:26.218475 containerd[1591]: time="2026-01-20T01:18:26.213026862Z" level=info msg="connecting to shim 
99093e1f9e59853bc366913791a4bb39d5c1158c9cb7465564c198ba84df8c71" address="unix:///run/containerd/s/a341fb622300a238784eef572cf7f25cfc1f3010a25b8719734374a0de627778" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:18:26.701660 containerd[1591]: time="2026-01-20T01:18:26.658150902Z" level=info msg="connecting to shim 5f5a5dc46594672f07df6a4a03e04102608dcd5a54b3f3b454d7617db845bc84" address="unix:///run/containerd/s/1194de9b12a812bb25e726bcb5ea7a195dff07cea00311d00768f9d9370d81ec" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:18:26.765885 containerd[1591]: time="2026-01-20T01:18:26.764021835Z" level=info msg="connecting to shim 3c03cf2f578d10413476685573727f83d2a442ff9385f7d309cc59a86bd03338" address="unix:///run/containerd/s/27ee043848ba04f2a846d55709ac1468a9682b7e9b9961a953535cf4bd993201" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:18:27.042342 kubelet[2638]: W0120 01:18:27.033596 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:27.073534 kubelet[2638]: W0120 01:18:27.034127 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:27.073534 kubelet[2638]: E0120 01:18:27.068117 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:27.073534 kubelet[2638]: E0120 01:18:27.070963 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:27.582945 kubelet[2638]: I0120 01:18:27.581472 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:27.608107 kubelet[2638]: E0120 01:18:27.601720 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 20 01:18:28.272131 kubelet[2638]: W0120 01:18:28.269677 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:28.272131 kubelet[2638]: E0120 01:18:28.271650 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:28.433304 systemd[1]: Started cri-containerd-3c03cf2f578d10413476685573727f83d2a442ff9385f7d309cc59a86bd03338.scope - libcontainer container 
3c03cf2f578d10413476685573727f83d2a442ff9385f7d309cc59a86bd03338. Jan 20 01:18:29.386993 systemd[1]: Started cri-containerd-5f5a5dc46594672f07df6a4a03e04102608dcd5a54b3f3b454d7617db845bc84.scope - libcontainer container 5f5a5dc46594672f07df6a4a03e04102608dcd5a54b3f3b454d7617db845bc84. Jan 20 01:18:29.488403 systemd[1]: Started cri-containerd-99093e1f9e59853bc366913791a4bb39d5c1158c9cb7465564c198ba84df8c71.scope - libcontainer container 99093e1f9e59853bc366913791a4bb39d5c1158c9cb7465564c198ba84df8c71. Jan 20 01:18:30.482559 containerd[1591]: time="2026-01-20T01:18:30.478732687Z" level=error msg="get state for 3c03cf2f578d10413476685573727f83d2a442ff9385f7d309cc59a86bd03338" error="context deadline exceeded" Jan 20 01:18:30.482559 containerd[1591]: time="2026-01-20T01:18:30.479121118Z" level=warning msg="unknown status" status=0 Jan 20 01:18:31.079675 kubelet[2638]: E0120 01:18:31.017158 2638 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4b9e9f9b720f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,LastTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:18:31.103633 kubelet[2638]: I0120 01:18:31.092959 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:31.103633 kubelet[2638]: E0120 01:18:31.098083 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 20 01:18:31.170719 kubelet[2638]: W0120 01:18:31.161672 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:31.170719 kubelet[2638]: E0120 01:18:31.163055 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:31.208116 containerd[1591]: time="2026-01-20T01:18:31.204924900Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:18:32.364659 kubelet[2638]: E0120 01:18:32.364156 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="7s" Jan 20 01:18:32.405001 kubelet[2638]: E0120 01:18:32.402429 2638 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:32.525935 containerd[1591]: time="2026-01-20T01:18:32.525698979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cb5a5097cb878fec302ec9db4124e0fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c03cf2f578d10413476685573727f83d2a442ff9385f7d309cc59a86bd03338\"" Jan 20 01:18:32.558938 kubelet[2638]: E0120 01:18:32.556932 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:32.636697 containerd[1591]: time="2026-01-20T01:18:32.632444837Z" level=info msg="CreateContainer within sandbox \"3c03cf2f578d10413476685573727f83d2a442ff9385f7d309cc59a86bd03338\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:18:32.874133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2137883033.mount: Deactivated successfully. Jan 20 01:18:33.020713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794074563.mount: Deactivated successfully. Jan 20 01:18:33.082646 containerd[1591]: time="2026-01-20T01:18:33.082581326Z" level=info msg="Container 8f14a1b57d5b2140244cbedc9abf468a9b0721755bf562be9dd9941256650b77: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:18:33.103497 containerd[1591]: time="2026-01-20T01:18:33.099029691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"99093e1f9e59853bc366913791a4bb39d5c1158c9cb7465564c198ba84df8c71\"" Jan 20 01:18:33.143990 kubelet[2638]: E0120 01:18:33.138401 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:18:33.143990 kubelet[2638]: E0120 01:18:33.137708 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:33.891420 containerd[1591]: time="2026-01-20T01:18:33.887566159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f5a5dc46594672f07df6a4a03e04102608dcd5a54b3f3b454d7617db845bc84\"" Jan 20 01:18:34.098462 containerd[1591]: time="2026-01-20T01:18:34.092656289Z" level=info msg="CreateContainer within sandbox \"99093e1f9e59853bc366913791a4bb39d5c1158c9cb7465564c198ba84df8c71\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:18:34.150615 kubelet[2638]: E0120 01:18:34.140037 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:34.163968 containerd[1591]: time="2026-01-20T01:18:34.095999504Z" level=info msg="CreateContainer within sandbox \"3c03cf2f578d10413476685573727f83d2a442ff9385f7d309cc59a86bd03338\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f14a1b57d5b2140244cbedc9abf468a9b0721755bf562be9dd9941256650b77\"" Jan 20 01:18:34.173067 containerd[1591]: time="2026-01-20T01:18:34.167281653Z" level=info msg="StartContainer for 
\"8f14a1b57d5b2140244cbedc9abf468a9b0721755bf562be9dd9941256650b77\"" Jan 20 01:18:34.180387 containerd[1591]: time="2026-01-20T01:18:34.180103281Z" level=info msg="CreateContainer within sandbox \"5f5a5dc46594672f07df6a4a03e04102608dcd5a54b3f3b454d7617db845bc84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:18:34.203957 containerd[1591]: time="2026-01-20T01:18:34.203723281Z" level=info msg="connecting to shim 8f14a1b57d5b2140244cbedc9abf468a9b0721755bf562be9dd9941256650b77" address="unix:///run/containerd/s/27ee043848ba04f2a846d55709ac1468a9682b7e9b9961a953535cf4bd993201" protocol=ttrpc version=3 Jan 20 01:18:34.705936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141680681.mount: Deactivated successfully. Jan 20 01:18:34.765730 containerd[1591]: time="2026-01-20T01:18:34.765597748Z" level=info msg="Container 1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:18:34.827635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123950940.mount: Deactivated successfully. Jan 20 01:18:34.847245 containerd[1591]: time="2026-01-20T01:18:34.846999604Z" level=info msg="Container 1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:18:34.912552 systemd[1]: Started cri-containerd-8f14a1b57d5b2140244cbedc9abf468a9b0721755bf562be9dd9941256650b77.scope - libcontainer container 8f14a1b57d5b2140244cbedc9abf468a9b0721755bf562be9dd9941256650b77. Jan 20 01:18:34.955910 containerd[1591]: time="2026-01-20T01:18:34.949674043Z" level=info msg="CreateContainer within sandbox \"99093e1f9e59853bc366913791a4bb39d5c1158c9cb7465564c198ba84df8c71\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7\"" Jan 20 01:18:34.987557 containerd[1591]: time="2026-01-20T01:18:34.986973721Z" level=info msg="StartContainer for \"1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7\"" Jan 20 01:18:35.028741 containerd[1591]: time="2026-01-20T01:18:35.027347687Z" level=info msg="connecting to shim 1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7" address="unix:///run/containerd/s/a341fb622300a238784eef572cf7f25cfc1f3010a25b8719734374a0de627778" protocol=ttrpc version=3 Jan 20 01:18:35.100006 containerd[1591]: time="2026-01-20T01:18:35.098975076Z" level=info msg="CreateContainer within sandbox \"5f5a5dc46594672f07df6a4a03e04102608dcd5a54b3f3b454d7617db845bc84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2\"" Jan 20 01:18:35.118400 containerd[1591]: time="2026-01-20T01:18:35.105683747Z" level=info msg="StartContainer for \"1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2\"" Jan 20 01:18:35.118400 containerd[1591]: time="2026-01-20T01:18:35.108109432Z" level=info msg="connecting to shim 1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2" address="unix:///run/containerd/s/1194de9b12a812bb25e726bcb5ea7a195dff07cea00311d00768f9d9370d81ec" protocol=ttrpc version=3 Jan 20 01:18:36.009519 systemd[1]: Started cri-containerd-1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7.scope - libcontainer container 1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7. 
Jan 20 01:18:36.279572 systemd[1]: Started cri-containerd-1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2.scope - libcontainer container 1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2. Jan 20 01:18:36.960515 kubelet[2638]: W0120 01:18:36.949361 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:36.960515 kubelet[2638]: E0120 01:18:36.949618 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:37.357677 kubelet[2638]: W0120 01:18:37.287367 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:37.357677 kubelet[2638]: E0120 01:18:37.287712 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:37.602091 kubelet[2638]: I0120 01:18:37.600265 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:37.630589 kubelet[2638]: E0120 01:18:37.630461 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 20 01:18:38.537728 containerd[1591]: time="2026-01-20T01:18:38.537661219Z" level=info msg="StartContainer for \"8f14a1b57d5b2140244cbedc9abf468a9b0721755bf562be9dd9941256650b77\" returns successfully" Jan 20 01:18:39.361220 containerd[1591]: time="2026-01-20T01:18:39.360985337Z" level=info msg="StartContainer for \"1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7\" returns successfully" Jan 20 01:18:39.370486 kubelet[2638]: E0120 01:18:39.365398 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:39.370486 kubelet[2638]: E0120 01:18:39.365598 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:39.379595 kubelet[2638]: E0120 01:18:39.379517 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="7s" Jan 20 01:18:39.948540 containerd[1591]: time="2026-01-20T01:18:39.844749611Z" level=info msg="StartContainer for \"1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2\" returns successfully" Jan 20 01:18:40.217546 kubelet[2638]: W0120 01:18:39.895693 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:40.217546 kubelet[2638]: E0120 01:18:39.896074 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:41.572517 kubelet[2638]: W0120 01:18:41.563923 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 20 01:18:41.586986 kubelet[2638]: E0120 01:18:41.575054 2638 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4b9e9f9b720f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,LastTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:18:41.586986 kubelet[2638]: E0120 01:18:41.575507 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:18:41.877051 kubelet[2638]: E0120 01:18:41.861735 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:41.877051 kubelet[2638]: E0120 01:18:41.862258 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:41.890227 kubelet[2638]: E0120 01:18:41.878424 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:41.890227 kubelet[2638]: E0120 01:18:41.878617 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:41.890227 kubelet[2638]: E0120 01:18:41.879084 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:42.125037 kubelet[2638]: E0120 01:18:42.124432 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:42.201564 update_engine[1560]: I20260120 01:18:42.200657 
1560 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 01:18:42.269727 update_engine[1560]: I20260120 01:18:42.217414 1560 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 01:18:42.269727 update_engine[1560]: I20260120 01:18:42.225317 1560 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 01:18:42.269727 update_engine[1560]: I20260120 01:18:42.255995 1560 omaha_request_params.cc:62] Current group set to stable Jan 20 01:18:42.276046 update_engine[1560]: I20260120 01:18:42.275709 1560 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 01:18:42.282985 update_engine[1560]: I20260120 01:18:42.282935 1560 update_attempter.cc:643] Scheduling an action processor start. Jan 20 01:18:42.283245 update_engine[1560]: I20260120 01:18:42.283210 1560 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 01:18:42.284491 update_engine[1560]: I20260120 01:18:42.283747 1560 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 01:18:42.284995 update_engine[1560]: I20260120 01:18:42.284965 1560 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 01:18:42.285083 update_engine[1560]: I20260120 01:18:42.285061 1560 omaha_request_action.cc:272] Request: [Omaha request XML omitted] Jan 20 01:18:42.303611 update_engine[1560]: I20260120 01:18:42.285861 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:18:42.508358 update_engine[1560]: I20260120 01:18:42.478730 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:18:42.522452 update_engine[1560]: I20260120 01:18:42.522290 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 01:18:42.523238 locksmithd[1622]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 01:18:42.543228 update_engine[1560]: E20260120 01:18:42.542457 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:18:42.543228 update_engine[1560]: I20260120 01:18:42.543064 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 01:18:42.846436 kubelet[2638]: E0120 01:18:42.837711 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:42.861542 kubelet[2638]: E0120 01:18:42.860922 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:42.861701 kubelet[2638]: E0120 01:18:42.861568 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:42.862212 kubelet[2638]: E0120 01:18:42.848084 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:42.862462 kubelet[2638]: E0120 01:18:42.862439 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:42.891072 kubelet[2638]: E0120 01:18:42.879600 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:43.315640 kubelet[2638]: E0120 01:18:43.309624 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:18:43.824444 kubelet[2638]: E0120 01:18:43.824311 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:43.824973 kubelet[2638]: E0120 01:18:43.824590 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:45.086000 kubelet[2638]: I0120 01:18:45.072022 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:18:45.218051 kubelet[2638]: E0120 01:18:45.212744 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:45.218577 kubelet[2638]: E0120 01:18:45.218553 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:46.007006 kubelet[2638]: E0120 01:18:46.005254 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:46.021182 kubelet[2638]: E0120 01:18:46.020631 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:48.300654 kubelet[2638]: E0120 
01:18:48.300375 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:48.305985 kubelet[2638]: E0120 01:18:48.305717 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:53.175239 update_engine[1560]: I20260120 01:18:53.169451 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:18:53.200619 update_engine[1560]: I20260120 01:18:53.170015 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:18:53.209725 update_engine[1560]: I20260120 01:18:53.203516 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:18:53.284528 update_engine[1560]: E20260120 01:18:53.269433 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:18:53.284528 update_engine[1560]: I20260120 01:18:53.269613 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 01:18:53.319580 kubelet[2638]: E0120 01:18:53.319530 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:18:55.140579 kubelet[2638]: E0120 01:18:55.129215 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:18:56.523676 kubelet[2638]: E0120 01:18:56.516557 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 20 01:18:58.308381 kubelet[2638]: E0120 01:18:58.305941 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:18:58.319360 kubelet[2638]: E0120 01:18:58.312738 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:18:59.097581 kubelet[2638]: E0120 01:18:58.950673 2638 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 01:18:59.155419 kubelet[2638]: E0120 01:18:59.150283 2638 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Jan 20 01:19:02.089507 kubelet[2638]: E0120 01:19:02.083334 2638 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4b9e9f9b720f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,LastTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:19:02.293165 kubelet[2638]: W0120 01:19:02.279933 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 01:19:02.293165 kubelet[2638]: E0120 01:19:02.280296 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 01:19:03.131110 kubelet[2638]: I0120 01:19:03.130692 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:19:03.207507 update_engine[1560]: I20260120 01:19:03.168736 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:19:03.214691 update_engine[1560]: I20260120 01:19:03.214646 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:19:03.226528 update_engine[1560]: I20260120 01:19:03.226476 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:19:03.277733 update_engine[1560]: E20260120 01:19:03.277654 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:19:03.278540 update_engine[1560]: I20260120 01:19:03.278501 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 01:19:03.352300 kubelet[2638]: E0120 01:19:03.348693 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:19:05.215876 kubelet[2638]: W0120 01:19:05.195682 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 01:19:05.232606 kubelet[2638]: E0120 01:19:05.232550 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 01:19:10.478547 kubelet[2638]: W0120 01:19:10.472665 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 01:19:10.590270 kubelet[2638]: E0120 01:19:10.581608 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 
01:19:12.480106 kubelet[2638]: W0120 01:19:12.472170 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 01:19:12.517246 kubelet[2638]: E0120 01:19:12.505622 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 01:19:13.352629 update_engine[1560]: I20260120 01:19:13.231740 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.360561 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.369740 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:19:13.415338 update_engine[1560]: E20260120 01:19:13.398718 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.399129 1560 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.399153 1560 omaha_request_action.cc:617] Omaha request response: Jan 20 01:19:13.415338 update_engine[1560]: E20260120 01:19:13.399475 1560 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.399621 1560 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.399636 1560 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.399647 1560 update_attempter.cc:306] Processing Done. Jan 20 01:19:13.415338 update_engine[1560]: E20260120 01:19:13.399744 1560 update_attempter.cc:619] Update failed. Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.406407 1560 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.406430 1560 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.406441 1560 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.406680 1560 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.406723 1560 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 01:19:13.415338 update_engine[1560]: I20260120 01:19:13.406734 1560 omaha_request_action.cc:272] Request: [Omaha request XML omitted] Jan 20 01:19:13.469455 locksmithd[1622]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 01:19:13.469455 locksmithd[1622]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 01:19:13.471602 kubelet[2638]: E0120 01:19:13.381507 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:19:13.471602 kubelet[2638]: E0120 01:19:13.404309 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:19:13.472304 update_engine[1560]: I20260120 01:19:13.406745 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:19:13.472304 update_engine[1560]: I20260120 01:19:13.409718 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:19:13.472304 update_engine[1560]: I20260120 01:19:13.426609 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:19:13.472304 update_engine[1560]: E20260120 01:19:13.461443 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:19:13.472304 update_engine[1560]: I20260120 01:19:13.461571 1560 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 01:19:13.472304 update_engine[1560]: I20260120 01:19:13.461592 1560 omaha_request_action.cc:617] Omaha request response: Jan 20 01:19:13.472304 update_engine[1560]: I20260120 01:19:13.461606 1560 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:19:13.472304 update_engine[1560]: I20260120 01:19:13.461619 1560 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:19:13.472304 update_engine[1560]: I20260120 01:19:13.461630 1560 update_attempter.cc:306] Processing Done. Jan 20 01:19:13.472304 update_engine[1560]: I20260120 01:19:13.461647 1560 update_attempter.cc:310] Error event sent.
Jan 20 01:19:13.472304 update_engine[1560]: I20260120 01:19:13.461665 1560 update_check_scheduler.cc:74] Next update check in 40m0s Jan 20 01:19:13.907723 kubelet[2638]: E0120 01:19:13.904364 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 20 01:19:20.785448 kubelet[2638]: I0120 01:19:20.779272 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:19:22.322153 kubelet[2638]: E0120 01:19:22.310215 2638 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4b9e9f9b720f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,LastTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:19:22.847662 kubelet[2638]: E0120 01:19:22.847609 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:19:22.848608 kubelet[2638]: E0120 01:19:22.848505 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:19:23.469243 kubelet[2638]: E0120 01:19:23.432233 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:19:36.096593 kubelet[2638]: E0120 01:19:36.003099 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 20 01:19:36.096593 kubelet[2638]: E0120 01:19:36.036301 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:19:36.198298 kubelet[2638]: E0120 01:19:36.106543 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:19:43.667414 kubelet[2638]: I0120 01:19:43.662544 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:19:46.040103 kubelet[2638]: E0120 01:19:46.039254 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:19:46.104413 kubelet[2638]: E0120 01:19:46.102443 2638 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 01:19:46.104413 kubelet[2638]: E0120 01:19:46.096642 2638 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4b9e9f9b720f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,LastTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:19:46.156324 kubelet[2638]: W0120 01:19:46.154309 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 01:19:46.156324 kubelet[2638]: E0120 01:19:46.154410 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 01:19:53.310379 kubelet[2638]: E0120 01:19:53.300024 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 20 01:19:53.696067 kubelet[2638]: W0120 01:19:53.438512 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 01:19:53.696067 kubelet[2638]: E0120 01:19:53.449271 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 01:19:53.723149 kubelet[2638]: E0120 01:19:53.723098 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:19:53.729013 kubelet[2638]: W0120 01:19:53.724147 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 01:19:53.729013 kubelet[2638]: E0120 01:19:53.724408 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 01:19:55.304728 kubelet[2638]: W0120 01:19:55.299194 2638 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 01:19:55.414323 kubelet[2638]: E0120 01:19:55.387603 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 01:19:56.055065 kubelet[2638]: E0120 01:19:56.055010 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:19:58.002583 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Jan 20 01:20:01.386100 systemd-tmpfiles[2923]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 01:20:01.394135 systemd-tmpfiles[2923]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 01:20:01.395317 systemd-tmpfiles[2923]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:20:01.405390 systemd-tmpfiles[2923]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:20:02.235551 systemd-tmpfiles[2923]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:20:02.255364 systemd-tmpfiles[2923]: ACLs are not supported, ignoring. Jan 20 01:20:02.255465 systemd-tmpfiles[2923]: ACLs are not supported, ignoring. Jan 20 01:20:03.196552 systemd-tmpfiles[2923]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:20:03.196696 systemd-tmpfiles[2923]: Skipping /boot Jan 20 01:20:04.677435 kubelet[2638]: I0120 01:20:04.613535 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:20:05.499320 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 20 01:20:05.503700 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 20 01:20:05.966586 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. 
Jan 20 01:20:06.081487 kubelet[2638]: E0120 01:20:06.081301 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:20:06.172132 kubelet[2638]: E0120 01:20:06.164712 2638 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4b9e9f9b720f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,LastTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:20:08.524573 kubelet[2638]: E0120 01:20:08.514517 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:20:08.548325 kubelet[2638]: E0120 01:20:08.542161 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:20:10.635706 kubelet[2638]: E0120 01:20:10.627720 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 20 01:20:16.305581 kubelet[2638]: E0120 01:20:16.285153 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:20:17.312123 kubelet[2638]: E0120 01:20:17.299718 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:20:17.347689 kubelet[2638]: E0120 01:20:17.338738 2638 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 01:20:23.935543 kernel: sched: DL replenish lagged too much Jan 20 01:20:26.372174 kubelet[2638]: I0120 01:20:26.371680 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:20:26.377078 kubelet[2638]: E0120 01:20:26.376644 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:20:26.377493 kubelet[2638]: E0120 01:20:26.377469 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:20:29.310727 kubelet[2638]: E0120 01:20:29.193266 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:20:29.635446 kubelet[2638]: E0120 
01:20:29.621356 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 20 01:20:33.183614 kubelet[2638]: I0120 01:20:33.179143 2638 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:20:33.183614 kubelet[2638]: E0120 01:20:33.179197 2638 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:20:33.287630 kubelet[2638]: E0120 01:20:33.277441 2638 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4b9e9f9b720f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,LastTimestamp:2026-01-20 01:18:18.304442895 +0000 UTC m=+6.376801667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:20:33.448229 kubelet[2638]: E0120 01:20:33.447561 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:20:33.551234 kubelet[2638]: E0120 01:20:33.549416 2638 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Jan 20 01:20:35.798223 kubelet[2638]: I0120 01:20:35.787749 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 01:20:35.930228 kubelet[2638]: E0120 01:20:35.915739 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:20:35.970146 kubelet[2638]: I0120 01:20:35.970086 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 01:20:37.040601 kubelet[2638]: I0120 01:20:37.032865 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:20:37.040601 kubelet[2638]: I0120 01:20:37.192435 2638 apiserver.go:52] "Watching apiserver" Jan 20 01:20:38.176240 kubelet[2638]: E0120 01:20:38.165427 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:20:38.176240 kubelet[2638]: E0120 01:20:38.168375 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:20:38.310364 kubelet[2638]: I0120 01:20:38.304710 2638 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:20:39.431676 kubelet[2638]: E0120 01:20:39.411494 2638 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.115s" Jan 20 01:20:39.477253 kubelet[2638]: E0120 01:20:39.471187 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:20:41.095695 kubelet[2638]: E0120 01:20:41.090601 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:20:43.338439 kubelet[2638]: I0120 01:20:43.317037 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.306689592 podStartE2EDuration="8.306689592s" podCreationTimestamp="2026-01-20 01:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:20:43.306194786 +0000 UTC m=+151.378553518" watchObservedRunningTime="2026-01-20 01:20:43.306689592 +0000 UTC m=+151.379048324" Jan 20 01:20:43.401217 kubelet[2638]: E0120 01:20:43.397328 2638 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.081s" Jan 20 01:20:45.134122 kubelet[2638]: I0120 01:20:45.133504 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.133429427 podStartE2EDuration="7.133429427s" podCreationTimestamp="2026-01-20 01:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:20:44.771610866 +0000 UTC m=+152.843969619" watchObservedRunningTime="2026-01-20 01:20:45.133429427 +0000 UTC m=+153.205788148" Jan 20 01:20:47.122133 kubelet[2638]: E0120 01:20:47.101675 2638 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.775s" Jan 20 01:20:47.180123 kubelet[2638]: E0120 01:20:47.134602 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:20:47.964918 kubelet[2638]: I0120 01:20:47.960545 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=12.960467664 podStartE2EDuration="12.960467664s" podCreationTimestamp="2026-01-20 01:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:20:47.960392505 +0000 UTC m=+156.032751257" watchObservedRunningTime="2026-01-20 01:20:47.960467664 +0000 UTC m=+156.032826386" Jan 20 01:20:52.464339 kubelet[2638]: E0120 01:20:52.362446 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:20:58.177436 kubelet[2638]: E0120 01:20:58.173926 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:09.067690 kubelet[2638]: E0120 01:21:09.056716 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:09.131350 kubelet[2638]: E0120 01:21:09.131307 2638 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too 
long" expected="1s" actual="4.2s" Jan 20 01:21:14.123198 kubelet[2638]: E0120 01:21:14.117437 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:19.134514 kubelet[2638]: E0120 01:21:19.131686 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:25.290965 systemd[1]: Reload requested from client PID 2934 ('systemctl') (unit session-5.scope)... Jan 20 01:21:25.294418 systemd[1]: Reloading... Jan 20 01:21:25.392344 kubelet[2638]: E0120 01:21:25.385592 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:31.687890 kubelet[2638]: E0120 01:21:31.681424 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:32.573288 zram_generator::config[2982]: No configuration found. Jan 20 01:21:32.600493 kubelet[2638]: E0120 01:21:32.600426 2638 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.016s" Jan 20 01:21:35.661507 kubelet[2638]: E0120 01:21:35.657594 2638 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.056s" Jan 20 01:21:35.757414 kubelet[2638]: E0120 01:21:35.738470 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:21:36.595567 kubelet[2638]: E0120 01:21:36.592484 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:21:36.794929 kubelet[2638]: E0120 01:21:36.792495 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:21:39.207406 systemd[1]: Reloading finished in 13903 ms. Jan 20 01:21:40.308224 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:21:40.551401 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:21:40.558665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:21:40.558746 systemd[1]: kubelet.service: Consumed 26.335s CPU time, 141M memory peak. Jan 20 01:21:40.640427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:21:47.119725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:21:47.265471 (kubelet)[3024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:21:50.085240 kubelet[3024]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:21:50.112432 kubelet[3024]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 20 01:21:50.112432 kubelet[3024]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:21:50.112432 kubelet[3024]: I0120 01:21:50.096623 3024 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:21:50.426678 kubelet[3024]: I0120 01:21:50.426615 3024 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 01:21:50.427221 kubelet[3024]: I0120 01:21:50.427195 3024 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:21:50.467692 kubelet[3024]: I0120 01:21:50.467641 3024 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 01:21:50.515389 kubelet[3024]: I0120 01:21:50.509403 3024 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 01:21:51.289081 kubelet[3024]: I0120 01:21:51.281634 3024 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:21:51.509445 kubelet[3024]: I0120 01:21:51.501295 3024 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:21:52.228580 kubelet[3024]: I0120 01:21:52.221720 3024 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 01:21:52.291450 kubelet[3024]: I0120 01:21:52.291367 3024 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:21:52.302125 kubelet[3024]: I0120 01:21:52.295442 3024 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:21:52.309134 kubelet[3024]: I0120 01:21:52.305535 3024 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 20 01:21:52.309134 kubelet[3024]: I0120 01:21:52.305578 3024 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 01:21:52.309134 kubelet[3024]: I0120 01:21:52.305677 3024 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:21:52.350144 kubelet[3024]: I0120 01:21:52.350087 3024 kubelet.go:446] "Attempting to sync node with API server" Jan 20 01:21:52.350447 kubelet[3024]: I0120 01:21:52.350420 3024 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:21:52.350615 kubelet[3024]: I0120 01:21:52.350594 3024 kubelet.go:352] "Adding apiserver pod source" Jan 20 01:21:52.350718 kubelet[3024]: I0120 01:21:52.350698 3024 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:21:52.407382 kubelet[3024]: I0120 01:21:52.407308 3024 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 01:21:52.489010 kubelet[3024]: I0120 01:21:52.462394 3024 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 01:21:52.489010 kubelet[3024]: I0120 01:21:52.466426 3024 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:21:52.489010 kubelet[3024]: I0120 01:21:52.466469 3024 server.go:1287] "Started kubelet" Jan 20 01:21:52.600089 kubelet[3024]: I0120 01:21:52.586274 3024 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:21:52.600089 kubelet[3024]: I0120 01:21:52.590364 3024 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:21:52.617212 kubelet[3024]: I0120 01:21:52.617170 3024 server.go:479] "Adding debug handlers to kubelet server" Jan 20 01:21:52.676630 kubelet[3024]: I0120 01:21:52.676589 3024 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:21:52.718322 kubelet[3024]: I0120 01:21:52.715262 3024 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:21:52.728209 kubelet[3024]: I0120 01:21:52.728171 3024 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:21:52.746442 kubelet[3024]: I0120 01:21:52.734652 3024 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:21:52.755632 kubelet[3024]: E0120 01:21:52.747417 3024 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:21:52.783121 kubelet[3024]: I0120 01:21:52.774055 3024 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:21:52.783121 kubelet[3024]: I0120 01:21:52.777619 3024 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:21:52.883630 kubelet[3024]: I0120 01:21:52.880484 3024 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:21:52.943199 kubelet[3024]: E0120 01:21:52.924631 3024 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:21:52.953397 kubelet[3024]: E0120 01:21:52.953358 3024 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:21:53.012154 kubelet[3024]: I0120 01:21:53.004623 3024 factory.go:221] Registration of the containerd container factory successfully Jan 20 01:21:53.018257 kubelet[3024]: I0120 01:21:53.012370 3024 factory.go:221] Registration of the systemd container factory successfully Jan 20 01:21:53.400491 kubelet[3024]: I0120 01:21:53.400452 3024 apiserver.go:52] "Watching apiserver" Jan 20 01:21:54.255485 kubelet[3024]: I0120 01:21:54.251355 3024 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 01:21:54.732449 kubelet[3024]: I0120 01:21:54.729569 3024 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 01:21:54.732449 kubelet[3024]: I0120 01:21:54.729747 3024 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 01:21:54.732449 kubelet[3024]: I0120 01:21:54.731468 3024 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:21:54.732449 kubelet[3024]: I0120 01:21:54.731483 3024 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 01:21:54.798493 kubelet[3024]: E0120 01:21:54.740411 3024 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:21:54.926370 kubelet[3024]: E0120 01:21:54.925657 3024 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:21:55.133310 kubelet[3024]: E0120 01:21:55.128707 3024 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:21:55.564094 kubelet[3024]: E0120 01:21:55.538630 3024 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:21:55.830548 kubelet[3024]: I0120 01:21:55.824307 3024 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:21:55.830548 kubelet[3024]: I0120 01:21:55.824343 3024 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:21:55.830548 kubelet[3024]: I0120 01:21:55.824504 3024 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:21:55.830548 kubelet[3024]: I0120 01:21:55.827510 3024 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 01:21:55.830548 kubelet[3024]: I0120 01:21:55.827540 3024 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 01:21:55.830548 kubelet[3024]: I0120 01:21:55.827582 3024 policy_none.go:49] "None policy: Start" Jan 20 01:21:55.830548 kubelet[3024]: I0120 01:21:55.827599 3024 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:21:55.830548 kubelet[3024]: I0120 01:21:55.827623 3024 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:21:55.830548 kubelet[3024]: I0120 01:21:55.828533 3024 state_mem.go:75] "Updated machine memory state" Jan 20 01:21:55.963528 kubelet[3024]: I0120 01:21:55.952164 3024 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 01:21:55.963528 kubelet[3024]: I0120 01:21:55.952472 3024 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:21:55.963528 kubelet[3024]: I0120 01:21:55.952495 3024 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:21:55.963528 
kubelet[3024]: I0120 01:21:55.953736 3024 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:21:55.979395 kubelet[3024]: I0120 01:21:55.979353 3024 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 01:21:55.988202 containerd[1591]: time="2026-01-20T01:21:55.987327943Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 01:21:55.999032 kubelet[3024]: E0120 01:21:55.995573 3024 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 01:21:56.103363 kubelet[3024]: I0120 01:21:56.093188 3024 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 01:21:56.396506 kubelet[3024]: I0120 01:21:56.389150 3024 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:21:56.499459 kubelet[3024]: I0120 01:21:56.494702 3024 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:21:56.587445 kubelet[3024]: I0120 01:21:56.587398 3024 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:21:56.638445 kubelet[3024]: I0120 01:21:56.594385 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:21:56.658297 kubelet[3024]: I0120 01:21:56.638719 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:21:56.669268 systemd[1]: Created slice kubepods-besteffort-pode037eb83_a61e_4eec_9b89_30759b613051.slice - libcontainer container kubepods-besteffort-pode037eb83_a61e_4eec_9b89_30759b613051.slice. 
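The container_manager_linux entry above serializes the kubelet's effective NodeConfig as JSON after nodeConfig=, including the hard eviction thresholds (memory.available < 100Mi, plus percentage thresholds for nodefs and imagefs space and inodes). A minimal sketch for pulling those thresholds back out of such a line; LINE below is a shortened, hand-trimmed sample of the entry above rather than the full log line:

import json
import re

# Shortened sample of the container_manager_linux entry above; only a few
# fields are reproduced here, the real entry carries the full NodeConfig JSON.
LINE = ('I0120 01:21:52.295442 3024 container_manager_linux.go:273] '
        '"Creating Container Manager object based on Node Config" '
        'nodeConfig={"CgroupDriver":"systemd","CgroupRoot":"/",'
        '"HardEvictionThresholds":[{"Signal":"memory.available",'
        '"Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},'
        '{"Signal":"nodefs.available","Operator":"LessThan",'
        '"Value":{"Quantity":null,"Percentage":0.1}}]}')

def hard_eviction_thresholds(line):
    """Extract the nodeConfig={...} JSON and return its HardEvictionThresholds."""
    m = re.search(r'nodeConfig=(\{.*\})', line)
    return json.loads(m.group(1)).get("HardEvictionThresholds", []) if m else []

for t in hard_eviction_thresholds(LINE):
    value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
    print(t["Signal"], t["Operator"], value)
# memory.available LessThan 100Mi
# nodefs.available LessThan 10%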
Jan 20 01:21:56.727559 kubelet[3024]: I0120 01:21:56.713731 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:21:56.779215 kubelet[3024]: I0120 01:21:56.778647 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:21:56.792308 kubelet[3024]: I0120 01:21:56.787681 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:21:56.793677 kubelet[3024]: I0120 01:21:56.787743 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:21:56.805343 kubelet[3024]: I0120 01:21:56.796303 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:21:56.805343 kubelet[3024]: I0120 01:21:56.796364 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:21:56.805343 kubelet[3024]: I0120 01:21:56.796398 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:21:56.805343 kubelet[3024]: I0120 01:21:56.796428 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e037eb83-a61e-4eec-9b89-30759b613051-kube-proxy\") pod \"kube-proxy-jjzrb\" (UID: \"e037eb83-a61e-4eec-9b89-30759b613051\") " pod="kube-system/kube-proxy-jjzrb" Jan 20 01:21:56.805343 kubelet[3024]: I0120 01:21:56.796616 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e037eb83-a61e-4eec-9b89-30759b613051-xtables-lock\") pod \"kube-proxy-jjzrb\" (UID: \"e037eb83-a61e-4eec-9b89-30759b613051\") " pod="kube-system/kube-proxy-jjzrb" Jan 20 
01:21:56.805613 kubelet[3024]: I0120 01:21:56.796652 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e037eb83-a61e-4eec-9b89-30759b613051-lib-modules\") pod \"kube-proxy-jjzrb\" (UID: \"e037eb83-a61e-4eec-9b89-30759b613051\") " pod="kube-system/kube-proxy-jjzrb" Jan 20 01:21:56.805613 kubelet[3024]: I0120 01:21:56.796682 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fjvg\" (UniqueName: \"kubernetes.io/projected/e037eb83-a61e-4eec-9b89-30759b613051-kube-api-access-7fjvg\") pod \"kube-proxy-jjzrb\" (UID: \"e037eb83-a61e-4eec-9b89-30759b613051\") " pod="kube-system/kube-proxy-jjzrb" Jan 20 01:21:57.473529 kubelet[3024]: E0120 01:21:57.472349 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:21:57.549509 kubelet[3024]: E0120 01:21:57.549454 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:21:57.884329 kubelet[3024]: I0120 01:21:57.857578 3024 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 01:21:57.884329 kubelet[3024]: I0120 01:21:57.860510 3024 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:21:58.248406 kubelet[3024]: E0120 01:21:58.194700 3024 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 01:21:58.248406 kubelet[3024]: E0120 01:21:58.200320 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.465s" Jan 20 01:21:58.248406 kubelet[3024]: E0120 01:21:58.206491 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:21:59.126569 kubelet[3024]: E0120 01:21:59.094111 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:21:59.723650 containerd[1591]: time="2026-01-20T01:21:59.602569301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jjzrb,Uid:e037eb83-a61e-4eec-9b89-30759b613051,Namespace:kube-system,Attempt:0,}" Jan 20 01:22:00.707264 kubelet[3024]: E0120 01:22:00.702671 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.812s" Jan 20 01:22:01.898340 containerd[1591]: time="2026-01-20T01:22:01.897166293Z" level=info msg="connecting to shim 147297391f352d5cadb2fe9a13bf1a2953df669a1d355d654117813d70373e6c" address="unix:///run/containerd/s/65b6d06b278a4a2b2f05f26b889a06a093ecef569d08beccb296b38a69a998d1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:22:04.483629 systemd[1]: Started cri-containerd-147297391f352d5cadb2fe9a13bf1a2953df669a1d355d654117813d70373e6c.scope - libcontainer container 147297391f352d5cadb2fe9a13bf1a2953df669a1d355d654117813d70373e6c. 
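The dns.go errors above are the kubelet trimming the pod resolv.conf to the resolver's three-nameserver limit: the applied line keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops whatever else the host listed (the dropped entries are not shown in this log). A rough sketch of that trimming; the four-entry resolv.conf below is an illustration, not this host's actual file:

MAX_NAMESERVERS = 3  # resolver limit that triggers the "Nameserver limits exceeded" warning

def applied_nameservers(resolv_conf):
    """Return the nameserver list that survives the trim (first three entries)."""
    servers = [line.split()[1]
               for line in resolv_conf.splitlines()
               if line.strip().startswith("nameserver") and len(line.split()) > 1]
    return servers[:MAX_NAMESERVERS]

# Hypothetical host resolv.conf with one nameserver too many:
sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
print(" ".join(applied_nameservers(sample)))   # -> 1.1.1.1 1.0.0.1 8.8.8.8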
Jan 20 01:22:10.118891 containerd[1591]: time="2026-01-20T01:22:10.118246591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jjzrb,Uid:e037eb83-a61e-4eec-9b89-30759b613051,Namespace:kube-system,Attempt:0,} returns sandbox id \"147297391f352d5cadb2fe9a13bf1a2953df669a1d355d654117813d70373e6c\"" Jan 20 01:22:10.419745 kubelet[3024]: E0120 01:22:10.414347 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.062s" Jan 20 01:22:10.687257 containerd[1591]: time="2026-01-20T01:22:10.682650802Z" level=info msg="CreateContainer within sandbox \"147297391f352d5cadb2fe9a13bf1a2953df669a1d355d654117813d70373e6c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 01:22:11.719843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount631187943.mount: Deactivated successfully. Jan 20 01:22:11.808421 containerd[1591]: time="2026-01-20T01:22:11.808354697Z" level=info msg="Container 2a04d7bed41df32a3c6f24452d8936143f6338f91b4bc7b8cd36d99f310958d0: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:22:11.820440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004853229.mount: Deactivated successfully. Jan 20 01:22:12.222111 containerd[1591]: time="2026-01-20T01:22:12.219578483Z" level=info msg="CreateContainer within sandbox \"147297391f352d5cadb2fe9a13bf1a2953df669a1d355d654117813d70373e6c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2a04d7bed41df32a3c6f24452d8936143f6338f91b4bc7b8cd36d99f310958d0\"" Jan 20 01:22:12.234121 containerd[1591]: time="2026-01-20T01:22:12.233509809Z" level=info msg="StartContainer for \"2a04d7bed41df32a3c6f24452d8936143f6338f91b4bc7b8cd36d99f310958d0\"" Jan 20 01:22:12.246195 containerd[1591]: time="2026-01-20T01:22:12.245647295Z" level=info msg="connecting to shim 2a04d7bed41df32a3c6f24452d8936143f6338f91b4bc7b8cd36d99f310958d0" address="unix:///run/containerd/s/65b6d06b278a4a2b2f05f26b889a06a093ecef569d08beccb296b38a69a998d1" protocol=ttrpc version=3 Jan 20 01:22:14.219240 kubelet[3024]: E0120 01:22:14.213440 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.348s" Jan 20 01:22:14.396373 systemd[1]: Started cri-containerd-2a04d7bed41df32a3c6f24452d8936143f6338f91b4bc7b8cd36d99f310958d0.scope - libcontainer container 2a04d7bed41df32a3c6f24452d8936143f6338f91b4bc7b8cd36d99f310958d0. 
Jan 20 01:22:16.469225 containerd[1591]: time="2026-01-20T01:22:16.466696701Z" level=error msg="get state for 2a04d7bed41df32a3c6f24452d8936143f6338f91b4bc7b8cd36d99f310958d0" error="context deadline exceeded" Jan 20 01:22:16.469225 containerd[1591]: time="2026-01-20T01:22:16.467036061Z" level=warning msg="unknown status" status=0 Jan 20 01:22:16.721209 containerd[1591]: time="2026-01-20T01:22:16.720518101Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:22:17.106268 containerd[1591]: time="2026-01-20T01:22:17.105730546Z" level=info msg="StartContainer for \"2a04d7bed41df32a3c6f24452d8936143f6338f91b4bc7b8cd36d99f310958d0\" returns successfully" Jan 20 01:22:17.910033 kubelet[3024]: I0120 01:22:17.907834 3024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jjzrb" podStartSLOduration=24.905720137 podStartE2EDuration="24.905720137s" podCreationTimestamp="2026-01-20 01:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:22:17.650318972 +0000 UTC m=+29.452057876" watchObservedRunningTime="2026-01-20 01:22:17.905720137 +0000 UTC m=+29.707459041" Jan 20 01:22:18.112327 kubelet[3024]: I0120 01:22:18.105303 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9rqg\" (UniqueName: \"kubernetes.io/projected/01d8de96-f066-41eb-bac3-effea5fe8330-kube-api-access-w9rqg\") pod \"kube-flannel-ds-mw4ng\" (UID: \"01d8de96-f066-41eb-bac3-effea5fe8330\") " pod="kube-flannel/kube-flannel-ds-mw4ng" Jan 20 01:22:18.112327 kubelet[3024]: I0120 01:22:18.105428 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/01d8de96-f066-41eb-bac3-effea5fe8330-cni\") pod \"kube-flannel-ds-mw4ng\" (UID: \"01d8de96-f066-41eb-bac3-effea5fe8330\") " pod="kube-flannel/kube-flannel-ds-mw4ng" Jan 20 01:22:18.112327 kubelet[3024]: I0120 01:22:18.105456 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/01d8de96-f066-41eb-bac3-effea5fe8330-run\") pod \"kube-flannel-ds-mw4ng\" (UID: \"01d8de96-f066-41eb-bac3-effea5fe8330\") " pod="kube-flannel/kube-flannel-ds-mw4ng" Jan 20 01:22:18.112327 kubelet[3024]: I0120 01:22:18.105477 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/01d8de96-f066-41eb-bac3-effea5fe8330-cni-plugin\") pod \"kube-flannel-ds-mw4ng\" (UID: \"01d8de96-f066-41eb-bac3-effea5fe8330\") " pod="kube-flannel/kube-flannel-ds-mw4ng" Jan 20 01:22:18.112327 kubelet[3024]: I0120 01:22:18.105501 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/01d8de96-f066-41eb-bac3-effea5fe8330-flannel-cfg\") pod \"kube-flannel-ds-mw4ng\" (UID: \"01d8de96-f066-41eb-bac3-effea5fe8330\") " pod="kube-flannel/kube-flannel-ds-mw4ng" Jan 20 01:22:18.112657 kubelet[3024]: I0120 01:22:18.105522 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01d8de96-f066-41eb-bac3-effea5fe8330-xtables-lock\") pod \"kube-flannel-ds-mw4ng\" (UID: \"01d8de96-f066-41eb-bac3-effea5fe8330\") " 
pod="kube-flannel/kube-flannel-ds-mw4ng" Jan 20 01:22:18.190705 systemd[1]: Created slice kubepods-burstable-pod01d8de96_f066_41eb_bac3_effea5fe8330.slice - libcontainer container kubepods-burstable-pod01d8de96_f066_41eb_bac3_effea5fe8330.slice. Jan 20 01:22:18.605344 containerd[1591]: time="2026-01-20T01:22:18.591496976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mw4ng,Uid:01d8de96-f066-41eb-bac3-effea5fe8330,Namespace:kube-flannel,Attempt:0,}" Jan 20 01:22:19.919141 containerd[1591]: time="2026-01-20T01:22:19.873690694Z" level=info msg="connecting to shim 483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8" address="unix:///run/containerd/s/ea8b62db1e7e40f44325201a7d51b2400f6c04117484103269fc951a36f9b9f8" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:22:19.995358 sudo[1770]: pam_unix(sudo:session): session closed for user root Jan 20 01:22:20.028077 sshd[1769]: Connection closed by 10.0.0.1 port 56332 Jan 20 01:22:20.073997 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jan 20 01:22:20.129590 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:56332.service: Deactivated successfully. Jan 20 01:22:20.170009 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 01:22:20.174609 systemd[1]: session-5.scope: Consumed 26.617s CPU time, 230.3M memory peak. Jan 20 01:22:20.257083 systemd-logind[1554]: Session 5 logged out. Waiting for processes to exit. Jan 20 01:22:20.268587 systemd-logind[1554]: Removed session 5. Jan 20 01:22:20.546367 systemd[1]: Started cri-containerd-483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8.scope - libcontainer container 483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8. Jan 20 01:22:21.983080 containerd[1591]: time="2026-01-20T01:22:21.982602442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mw4ng,Uid:01d8de96-f066-41eb-bac3-effea5fe8330,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8\"" Jan 20 01:22:22.112562 containerd[1591]: time="2026-01-20T01:22:22.110441299Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 20 01:22:28.740130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904414498.mount: Deactivated successfully. 
Jan 20 01:22:30.868351 containerd[1591]: time="2026-01-20T01:22:30.857255470Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:22:30.995083 containerd[1591]: time="2026-01-20T01:22:30.990634888Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 20 01:22:31.007171 containerd[1591]: time="2026-01-20T01:22:31.006505372Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:22:31.089211 containerd[1591]: time="2026-01-20T01:22:31.086408279Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:22:31.090321 containerd[1591]: time="2026-01-20T01:22:31.090279143Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 8.979699008s" Jan 20 01:22:31.090476 containerd[1591]: time="2026-01-20T01:22:31.090455050Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 20 01:22:31.178234 containerd[1591]: time="2026-01-20T01:22:31.165469117Z" level=info msg="CreateContainer within sandbox \"483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 20 01:22:31.430248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605153282.mount: Deactivated successfully. Jan 20 01:22:31.547079 containerd[1591]: time="2026-01-20T01:22:31.544246989Z" level=info msg="Container 53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:22:31.694375 containerd[1591]: time="2026-01-20T01:22:31.693636841Z" level=info msg="CreateContainer within sandbox \"483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417\"" Jan 20 01:22:31.742507 containerd[1591]: time="2026-01-20T01:22:31.699259362Z" level=info msg="StartContainer for \"53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417\"" Jan 20 01:22:31.742507 containerd[1591]: time="2026-01-20T01:22:31.709084832Z" level=info msg="connecting to shim 53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417" address="unix:///run/containerd/s/ea8b62db1e7e40f44325201a7d51b2400f6c04117484103269fc951a36f9b9f8" protocol=ttrpc version=3 Jan 20 01:22:32.257650 systemd[1]: Started cri-containerd-53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417.scope - libcontainer container 53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417. Jan 20 01:22:33.463718 systemd[1]: cri-containerd-53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417.scope: Deactivated successfully. 
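The pull above reads 3852937 bytes for flannel-cni-plugin:v1.1.2 in 8.979699008s, roughly 420 KiB/s. A quick back-of-the-envelope check with the two figures copied from the lines above; the rate is only indicative of registry transfer, since "bytes read" counts compressed layer data rather than unpacked image content:

bytes_read = 3_852_937        # "active requests=0, bytes read=3852937"
pull_seconds = 8.979699008    # "... in 8.979699008s" from the Pulled image line

print(f"{bytes_read / pull_seconds / 1024:.0f} KiB/s")   # -> about 419 KiB/s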
Jan 20 01:22:33.629495 containerd[1591]: time="2026-01-20T01:22:33.619685973Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01d8de96_f066_41eb_bac3_effea5fe8330.slice/cri-containerd-53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417.scope/memory.events\": no such file or directory" Jan 20 01:22:33.695054 containerd[1591]: time="2026-01-20T01:22:33.691377743Z" level=info msg="received container exit event container_id:\"53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417\" id:\"53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417\" pid:3361 exited_at:{seconds:1768872153 nanos:491728472}" Jan 20 01:22:33.858276 containerd[1591]: time="2026-01-20T01:22:33.835234011Z" level=info msg="StartContainer for \"53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417\" returns successfully" Jan 20 01:22:39.922680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417-rootfs.mount: Deactivated successfully. Jan 20 01:22:40.676328 containerd[1591]: time="2026-01-20T01:22:40.672435699Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 20 01:22:58.383627 systemd[1]: cri-containerd-1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2.scope: Deactivated successfully. Jan 20 01:22:58.399305 systemd[1]: cri-containerd-1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2.scope: Consumed 15.389s CPU time, 46.7M memory peak. Jan 20 01:22:58.857153 systemd[1]: cri-containerd-1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7.scope: Deactivated successfully. Jan 20 01:22:58.858676 systemd[1]: cri-containerd-1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7.scope: Consumed 19.948s CPU time, 22.3M memory peak. Jan 20 01:22:59.343364 containerd[1591]: time="2026-01-20T01:22:59.331595563Z" level=info msg="received container exit event container_id:\"1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2\" id:\"1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2\" pid:2865 exit_status:1 exited_at:{seconds:1768872179 nanos:222197925}" Jan 20 01:22:59.928704 containerd[1591]: time="2026-01-20T01:22:59.928556817Z" level=info msg="received container exit event container_id:\"1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7\" id:\"1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7\" pid:2858 exit_status:1 exited_at:{seconds:1768872179 nanos:583691099}" Jan 20 01:23:00.510559 kubelet[3024]: E0120 01:23:00.509986 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.818s" Jan 20 01:23:04.060707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2-rootfs.mount: Deactivated successfully. Jan 20 01:23:04.781430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7-rootfs.mount: Deactivated successfully. 
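The "received container exit event" entries carry protobuf-style exited_at timestamps: seconds since the Unix epoch plus a nanoseconds field. Converting the values from the event above shows it lines up with the surrounding 01:22:33 journal timestamps; a small sketch:

from datetime import datetime, timezone

def exited_at_to_utc(seconds, nanos):
    """Convert a containerd exit-event timestamp to an aware UTC datetime."""
    return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

# exited_at:{seconds:1768872153 nanos:491728472} from the event above:
print(exited_at_to_utc(1768872153, 491728472).isoformat())
# -> roughly 2026-01-20T01:22:33.491728+00:00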
Jan 20 01:23:05.076679 kubelet[3024]: I0120 01:23:04.997011 3024 scope.go:117] "RemoveContainer" containerID="1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2" Jan 20 01:23:05.559143 containerd[1591]: time="2026-01-20T01:23:05.553687522Z" level=info msg="CreateContainer within sandbox \"5f5a5dc46594672f07df6a4a03e04102608dcd5a54b3f3b454d7617db845bc84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 20 01:23:06.042646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount122392285.mount: Deactivated successfully. Jan 20 01:23:06.077114 containerd[1591]: time="2026-01-20T01:23:06.076715680Z" level=info msg="Container 9817170ea74088ae4fffd16b5aedc639811f13f1cddce0365e6dad07a72f83f0: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:23:06.092661 kubelet[3024]: I0120 01:23:06.092525 3024 scope.go:117] "RemoveContainer" containerID="1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7" Jan 20 01:23:06.138617 containerd[1591]: time="2026-01-20T01:23:06.129714602Z" level=info msg="CreateContainer within sandbox \"99093e1f9e59853bc366913791a4bb39d5c1158c9cb7465564c198ba84df8c71\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 20 01:23:06.557514 containerd[1591]: time="2026-01-20T01:23:06.557451722Z" level=info msg="CreateContainer within sandbox \"5f5a5dc46594672f07df6a4a03e04102608dcd5a54b3f3b454d7617db845bc84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9817170ea74088ae4fffd16b5aedc639811f13f1cddce0365e6dad07a72f83f0\"" Jan 20 01:23:06.610170 containerd[1591]: time="2026-01-20T01:23:06.610117912Z" level=info msg="StartContainer for \"9817170ea74088ae4fffd16b5aedc639811f13f1cddce0365e6dad07a72f83f0\"" Jan 20 01:23:06.679638 containerd[1591]: time="2026-01-20T01:23:06.679566384Z" level=info msg="connecting to shim 9817170ea74088ae4fffd16b5aedc639811f13f1cddce0365e6dad07a72f83f0" address="unix:///run/containerd/s/1194de9b12a812bb25e726bcb5ea7a195dff07cea00311d00768f9d9370d81ec" protocol=ttrpc version=3 Jan 20 01:23:06.930431 containerd[1591]: time="2026-01-20T01:23:06.930357531Z" level=info msg="Container 224668e1b6b230986e4901bdfa18acd1eedb68ae669fb2291786078eceb4260d: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:23:07.115357 containerd[1591]: time="2026-01-20T01:23:07.115296771Z" level=info msg="CreateContainer within sandbox \"99093e1f9e59853bc366913791a4bb39d5c1158c9cb7465564c198ba84df8c71\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"224668e1b6b230986e4901bdfa18acd1eedb68ae669fb2291786078eceb4260d\"" Jan 20 01:23:07.220024 containerd[1591]: time="2026-01-20T01:23:07.205675373Z" level=info msg="StartContainer for \"224668e1b6b230986e4901bdfa18acd1eedb68ae669fb2291786078eceb4260d\"" Jan 20 01:23:07.260497 containerd[1591]: time="2026-01-20T01:23:07.259740375Z" level=info msg="connecting to shim 224668e1b6b230986e4901bdfa18acd1eedb68ae669fb2291786078eceb4260d" address="unix:///run/containerd/s/a341fb622300a238784eef572cf7f25cfc1f3010a25b8719734374a0de627778" protocol=ttrpc version=3 Jan 20 01:23:07.993310 systemd[1]: Started cri-containerd-9817170ea74088ae4fffd16b5aedc639811f13f1cddce0365e6dad07a72f83f0.scope - libcontainer container 9817170ea74088ae4fffd16b5aedc639811f13f1cddce0365e6dad07a72f83f0. Jan 20 01:23:08.590588 systemd[1]: Started cri-containerd-224668e1b6b230986e4901bdfa18acd1eedb68ae669fb2291786078eceb4260d.scope - libcontainer container 224668e1b6b230986e4901bdfa18acd1eedb68ae669fb2291786078eceb4260d. 
Jan 20 01:23:10.132324 containerd[1591]: time="2026-01-20T01:23:10.130545507Z" level=error msg="get state for 9817170ea74088ae4fffd16b5aedc639811f13f1cddce0365e6dad07a72f83f0" error="context deadline exceeded" Jan 20 01:23:10.132324 containerd[1591]: time="2026-01-20T01:23:10.130694934Z" level=warning msg="unknown status" status=0 Jan 20 01:23:10.252744 containerd[1591]: time="2026-01-20T01:23:10.252655517Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:23:10.830415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769581647.mount: Deactivated successfully. Jan 20 01:23:13.738614 containerd[1591]: time="2026-01-20T01:23:13.696205746Z" level=info msg="StartContainer for \"9817170ea74088ae4fffd16b5aedc639811f13f1cddce0365e6dad07a72f83f0\" returns successfully" Jan 20 01:23:14.611593 containerd[1591]: time="2026-01-20T01:23:14.607191171Z" level=info msg="StartContainer for \"224668e1b6b230986e4901bdfa18acd1eedb68ae669fb2291786078eceb4260d\" returns successfully" Jan 20 01:23:32.828371 containerd[1591]: time="2026-01-20T01:23:32.704549577Z" level=warning msg="container event discarded" container=3c03cf2f578d10413476685573727f83d2a442ff9385f7d309cc59a86bd03338 type=CONTAINER_CREATED_EVENT Jan 20 01:23:33.076188 containerd[1591]: time="2026-01-20T01:23:32.989367335Z" level=warning msg="container event discarded" container=3c03cf2f578d10413476685573727f83d2a442ff9385f7d309cc59a86bd03338 type=CONTAINER_STARTED_EVENT Jan 20 01:23:33.097372 containerd[1591]: time="2026-01-20T01:23:33.084236410Z" level=warning msg="container event discarded" container=99093e1f9e59853bc366913791a4bb39d5c1158c9cb7465564c198ba84df8c71 type=CONTAINER_CREATED_EVENT Jan 20 01:23:33.097372 containerd[1591]: time="2026-01-20T01:23:33.084285881Z" level=warning msg="container event discarded" container=99093e1f9e59853bc366913791a4bb39d5c1158c9cb7465564c198ba84df8c71 type=CONTAINER_STARTED_EVENT Jan 20 01:23:33.789679 kubelet[3024]: E0120 01:23:33.789426 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.031s" Jan 20 01:23:33.906593 containerd[1591]: time="2026-01-20T01:23:33.897123827Z" level=warning msg="container event discarded" container=5f5a5dc46594672f07df6a4a03e04102608dcd5a54b3f3b454d7617db845bc84 type=CONTAINER_CREATED_EVENT Jan 20 01:23:33.935994 containerd[1591]: time="2026-01-20T01:23:33.930576011Z" level=warning msg="container event discarded" container=5f5a5dc46594672f07df6a4a03e04102608dcd5a54b3f3b454d7617db845bc84 type=CONTAINER_STARTED_EVENT Jan 20 01:23:33.969353 containerd[1591]: time="2026-01-20T01:23:33.969280192Z" level=warning msg="container event discarded" container=8f14a1b57d5b2140244cbedc9abf468a9b0721755bf562be9dd9941256650b77 type=CONTAINER_CREATED_EVENT Jan 20 01:23:34.938646 containerd[1591]: time="2026-01-20T01:23:34.938279741Z" level=warning msg="container event discarded" container=1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7 type=CONTAINER_CREATED_EVENT Jan 20 01:23:35.166569 containerd[1591]: time="2026-01-20T01:23:35.134486531Z" level=warning msg="container event discarded" container=1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2 type=CONTAINER_CREATED_EVENT Jan 20 01:23:38.440323 containerd[1591]: time="2026-01-20T01:23:38.440238646Z" level=warning msg="container event discarded" container=8f14a1b57d5b2140244cbedc9abf468a9b0721755bf562be9dd9941256650b77 type=CONTAINER_STARTED_EVENT Jan 20 01:23:39.551437 containerd[1591]: 
time="2026-01-20T01:23:39.547035995Z" level=warning msg="container event discarded" container=1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7 type=CONTAINER_STARTED_EVENT Jan 20 01:23:39.842578 containerd[1591]: time="2026-01-20T01:23:39.823602055Z" level=warning msg="container event discarded" container=1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2 type=CONTAINER_STARTED_EVENT Jan 20 01:23:52.849397 kubelet[3024]: E0120 01:23:52.848749 3024 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Jan 20 01:23:54.115066 kubelet[3024]: E0120 01:23:54.111198 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.262s" Jan 20 01:23:56.935970 kubelet[3024]: E0120 01:23:56.934989 3024 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:23:59.834886 kubelet[3024]: E0120 01:23:59.829105 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.092s" Jan 20 01:24:01.977522 containerd[1591]: time="2026-01-20T01:24:01.977458422Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:24:02.067171 containerd[1591]: time="2026-01-20T01:24:02.061366835Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357" Jan 20 01:24:02.129104 containerd[1591]: time="2026-01-20T01:24:02.128263570Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:24:02.202561 kubelet[3024]: E0120 01:24:02.202324 3024 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:24:04.409676 containerd[1591]: time="2026-01-20T01:24:04.409298395Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:24:04.476293 containerd[1591]: time="2026-01-20T01:24:04.437231751Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 1m23.760991932s" Jan 20 01:24:04.476293 containerd[1591]: time="2026-01-20T01:24:04.476058205Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 20 01:24:04.936373 containerd[1591]: time="2026-01-20T01:24:04.932503576Z" level=info msg="CreateContainer within sandbox \"483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 01:24:07.037088 kubelet[3024]: E0120 01:24:07.033516 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.082s" Jan 20 01:24:07.775114 kubelet[3024]: E0120 01:24:07.772111 
3024 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:24:08.366664 kubelet[3024]: E0120 01:24:08.352486 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.2s" Jan 20 01:24:08.887434 containerd[1591]: time="2026-01-20T01:24:08.881160890Z" level=info msg="Container 95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:24:10.239507 containerd[1591]: time="2026-01-20T01:24:10.123311557Z" level=info msg="CreateContainer within sandbox \"483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f\"" Jan 20 01:24:10.239507 containerd[1591]: time="2026-01-20T01:24:10.175953181Z" level=info msg="StartContainer for \"95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f\"" Jan 20 01:24:10.460975 containerd[1591]: time="2026-01-20T01:24:10.460740864Z" level=info msg="connecting to shim 95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f" address="unix:///run/containerd/s/ea8b62db1e7e40f44325201a7d51b2400f6c04117484103269fc951a36f9b9f8" protocol=ttrpc version=3 Jan 20 01:24:12.384253 systemd[1]: Started cri-containerd-95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f.scope - libcontainer container 95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f. Jan 20 01:24:13.143269 kubelet[3024]: E0120 01:24:13.119170 3024 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:24:14.602417 systemd[1]: cri-containerd-95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f.scope: Deactivated successfully. Jan 20 01:24:14.625002 containerd[1591]: time="2026-01-20T01:24:14.622722014Z" level=info msg="StartContainer for \"95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f\" returns successfully" Jan 20 01:24:14.644027 containerd[1591]: time="2026-01-20T01:24:14.643681635Z" level=info msg="received container exit event container_id:\"95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f\" id:\"95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f\" pid:3553 exited_at:{seconds:1768872254 nanos:633447501}" Jan 20 01:24:15.578236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f-rootfs.mount: Deactivated successfully. Jan 20 01:24:16.828940 containerd[1591]: time="2026-01-20T01:24:16.815415581Z" level=info msg="CreateContainer within sandbox \"483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 01:24:17.172381 containerd[1591]: time="2026-01-20T01:24:17.172313739Z" level=info msg="Container e9dab379d936c18e75d843bcbf0cb34c81a5547edeefe66ea04660a0330b033d: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:24:17.184118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428407943.mount: Deactivated successfully. 
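Each "connecting to shim ..." entry names the shim's ttrpc socket under /run/containerd/s/, and containers in the same pod reuse their sandbox's socket: the kube-proxy container connects through the 65b6d06b... address created with its sandbox, while the flannel install-cni-plugin, install-cni and kube-flannel containers all reuse ea8b62db.... A small log-analysis sketch that groups the connecting IDs by socket address, assuming the journal text is available as one string (the node.log name in the usage comment is made up; any exported journal would do):

import re
from collections import defaultdict

SHIM_RE = re.compile(r'connecting to shim (\w+)" address="(unix:///[^"]+)"')

def shims_by_socket(journal_text):
    """Map each shim socket address to the container/sandbox IDs that used it."""
    groups = defaultdict(list)
    for shim_id, address in SHIM_RE.findall(journal_text):
        groups[address].append(shim_id)
    return groups

# Hypothetical usage against an exported journal:
# with open("node.log") as f:
#     for addr, ids in shims_by_socket(f.read()).items():
#         print(addr, len(ids), "connections")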
Jan 20 01:24:17.361018 containerd[1591]: time="2026-01-20T01:24:17.357459049Z" level=info msg="CreateContainer within sandbox \"483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e9dab379d936c18e75d843bcbf0cb34c81a5547edeefe66ea04660a0330b033d\"" Jan 20 01:24:17.367066 containerd[1591]: time="2026-01-20T01:24:17.367014430Z" level=info msg="StartContainer for \"e9dab379d936c18e75d843bcbf0cb34c81a5547edeefe66ea04660a0330b033d\"" Jan 20 01:24:17.389037 containerd[1591]: time="2026-01-20T01:24:17.386502878Z" level=info msg="connecting to shim e9dab379d936c18e75d843bcbf0cb34c81a5547edeefe66ea04660a0330b033d" address="unix:///run/containerd/s/ea8b62db1e7e40f44325201a7d51b2400f6c04117484103269fc951a36f9b9f8" protocol=ttrpc version=3 Jan 20 01:24:17.881232 systemd[1]: Started cri-containerd-e9dab379d936c18e75d843bcbf0cb34c81a5547edeefe66ea04660a0330b033d.scope - libcontainer container e9dab379d936c18e75d843bcbf0cb34c81a5547edeefe66ea04660a0330b033d. Jan 20 01:24:18.616287 containerd[1591]: time="2026-01-20T01:24:18.616140996Z" level=info msg="StartContainer for \"e9dab379d936c18e75d843bcbf0cb34c81a5547edeefe66ea04660a0330b033d\" returns successfully" Jan 20 01:24:21.596367 systemd-networkd[1485]: flannel.1: Link UP Jan 20 01:24:21.596382 systemd-networkd[1485]: flannel.1: Gained carrier Jan 20 01:24:23.463005 systemd-networkd[1485]: flannel.1: Gained IPv6LL Jan 20 01:24:25.832200 kubelet[3024]: I0120 01:24:25.831037 3024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-mw4ng" podStartSLOduration=26.174199491 podStartE2EDuration="2m8.831010622s" podCreationTimestamp="2026-01-20 01:22:17 +0000 UTC" firstStartedPulling="2026-01-20 01:22:22.087405004 +0000 UTC m=+33.889143899" lastFinishedPulling="2026-01-20 01:24:04.744216137 +0000 UTC m=+136.545955030" observedRunningTime="2026-01-20 01:24:19.299729478 +0000 UTC m=+151.101468382" watchObservedRunningTime="2026-01-20 01:24:25.831010622 +0000 UTC m=+157.632749516" Jan 20 01:24:25.979354 systemd[1]: Created slice kubepods-burstable-pod4dabd437_ad8b_43e3_8171_6886c8d73dd9.slice - libcontainer container kubepods-burstable-pod4dabd437_ad8b_43e3_8171_6886c8d73dd9.slice. 
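In the pod_startup_latency_tracker entry above the figures are internally consistent: podStartSLOduration (26.17s) is podStartE2EDuration (2m8.83s) minus the image-pull window between firstStartedPulling and lastFinishedPulling (about 1m42.66s), i.e. the startup latency with image pulling excluded. Redoing that arithmetic with the values copied from the line above, truncated to microseconds:

from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
first_started_pulling = datetime.strptime("2026-01-20 01:22:22.087405", FMT)
last_finished_pulling = datetime.strptime("2026-01-20 01:24:04.744216", FMT)

pull_window = (last_finished_pulling - first_started_pulling).total_seconds()
e2e = 128.831010622                         # podStartE2EDuration="2m8.831010622s"
print(f"image pull window:  {pull_window:.3f}s")   # ~102.657s
print(f"e2e minus pulling:  {e2e - pull_window:.3f}s")
# -> ~26.174s, matching podStartSLOduration=26.174199491 above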
Jan 20 01:24:26.020201 kubelet[3024]: I0120 01:24:26.020150 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjhjb\" (UniqueName: \"kubernetes.io/projected/60215ed4-4983-49ba-a00f-9c33cd98d1ec-kube-api-access-pjhjb\") pod \"coredns-668d6bf9bc-z5nkf\" (UID: \"60215ed4-4983-49ba-a00f-9c33cd98d1ec\") " pod="kube-system/coredns-668d6bf9bc-z5nkf" Jan 20 01:24:26.020927 kubelet[3024]: I0120 01:24:26.020600 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60215ed4-4983-49ba-a00f-9c33cd98d1ec-config-volume\") pod \"coredns-668d6bf9bc-z5nkf\" (UID: \"60215ed4-4983-49ba-a00f-9c33cd98d1ec\") " pod="kube-system/coredns-668d6bf9bc-z5nkf" Jan 20 01:24:26.020927 kubelet[3024]: I0120 01:24:26.020650 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dabd437-ad8b-43e3-8171-6886c8d73dd9-config-volume\") pod \"coredns-668d6bf9bc-kv4qq\" (UID: \"4dabd437-ad8b-43e3-8171-6886c8d73dd9\") " pod="kube-system/coredns-668d6bf9bc-kv4qq" Jan 20 01:24:26.020927 kubelet[3024]: I0120 01:24:26.020683 3024 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k99dl\" (UniqueName: \"kubernetes.io/projected/4dabd437-ad8b-43e3-8171-6886c8d73dd9-kube-api-access-k99dl\") pod \"coredns-668d6bf9bc-kv4qq\" (UID: \"4dabd437-ad8b-43e3-8171-6886c8d73dd9\") " pod="kube-system/coredns-668d6bf9bc-kv4qq" Jan 20 01:24:26.093073 systemd[1]: Created slice kubepods-burstable-pod60215ed4_4983_49ba_a00f_9c33cd98d1ec.slice - libcontainer container kubepods-burstable-pod60215ed4_4983_49ba_a00f_9c33cd98d1ec.slice. 
Jan 20 01:24:26.354095 containerd[1591]: time="2026-01-20T01:24:26.353685888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kv4qq,Uid:4dabd437-ad8b-43e3-8171-6886c8d73dd9,Namespace:kube-system,Attempt:0,}" Jan 20 01:24:26.449024 containerd[1591]: time="2026-01-20T01:24:26.448953573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5nkf,Uid:60215ed4-4983-49ba-a00f-9c33cd98d1ec,Namespace:kube-system,Attempt:0,}" Jan 20 01:24:26.947635 systemd-networkd[1485]: cni0: Link UP Jan 20 01:24:26.947644 systemd-networkd[1485]: cni0: Gained carrier Jan 20 01:24:26.970019 systemd-networkd[1485]: cni0: Lost carrier Jan 20 01:24:27.334267 systemd-networkd[1485]: veth821fdd79: Link UP Jan 20 01:24:27.413452 kernel: cni0: port 1(veth821fdd79) entered blocking state Jan 20 01:24:27.418045 kernel: cni0: port 1(veth821fdd79) entered disabled state Jan 20 01:24:27.418085 kernel: veth821fdd79: entered allmulticast mode Jan 20 01:24:27.502374 kernel: veth821fdd79: entered promiscuous mode Jan 20 01:24:27.509612 systemd-networkd[1485]: veth127e736b: Link UP Jan 20 01:24:27.593369 kernel: cni0: port 2(veth127e736b) entered blocking state Jan 20 01:24:27.595726 kernel: cni0: port 2(veth127e736b) entered disabled state Jan 20 01:24:27.595963 kernel: veth127e736b: entered allmulticast mode Jan 20 01:24:27.621706 kernel: veth127e736b: entered promiscuous mode Jan 20 01:24:27.719095 kernel: cni0: port 1(veth821fdd79) entered blocking state Jan 20 01:24:27.721936 kernel: cni0: port 1(veth821fdd79) entered forwarding state Jan 20 01:24:27.777017 systemd-networkd[1485]: veth821fdd79: Gained carrier Jan 20 01:24:27.777367 systemd-networkd[1485]: cni0: Gained carrier Jan 20 01:24:27.796619 kernel: cni0: port 2(veth127e736b) entered blocking state Jan 20 01:24:27.796730 kernel: cni0: port 2(veth127e736b) entered forwarding state Jan 20 01:24:27.797196 systemd-networkd[1485]: veth127e736b: Gained carrier Jan 20 01:24:27.822591 containerd[1591]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c938), "name":"cbr0", "type":"bridge"} Jan 20 01:24:27.822591 containerd[1591]: delegateAdd: netconf sent to delegate plugin: Jan 20 01:24:27.897035 containerd[1591]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Jan 20 01:24:27.897035 containerd[1591]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000020938), "name":"cbr0", "type":"bridge"} Jan 20 01:24:27.897035 containerd[1591]: delegateAdd: netconf sent to delegate plugin: Jan 20 01:24:28.397210 systemd-networkd[1485]: cni0: 
Gained IPv6LL Jan 20 01:24:28.925241 systemd-networkd[1485]: veth127e736b: Gained IPv6LL Jan 20 01:24:29.000170 containerd[1591]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T01:24:28.997641558Z" level=info msg="connecting to shim 6aeb71f6a4dc7832622629624995cbbb7c8654435d197019caa31f2129b47f02" address="unix:///run/containerd/s/ba0d181edd0048e87535723e94bc8a7524ae2fb6871e68d1c87d9ab8d41494a5" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:24:29.008988 containerd[1591]: time="2026-01-20T01:24:29.007076503Z" level=info msg="connecting to shim 3417daa26537967e0ee14ecbc512d38f7b633b97875ae6fa6098262d1f451d34" address="unix:///run/containerd/s/61cf9990cfbc2aea6b722c2d7f8496163a054a747ba67eb239f69727a4b09a0f" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:24:29.365155 systemd-networkd[1485]: veth821fdd79: Gained IPv6LL Jan 20 01:24:30.474997 systemd[1]: Started cri-containerd-3417daa26537967e0ee14ecbc512d38f7b633b97875ae6fa6098262d1f451d34.scope - libcontainer container 3417daa26537967e0ee14ecbc512d38f7b633b97875ae6fa6098262d1f451d34. Jan 20 01:24:30.482946 systemd[1]: Started cri-containerd-6aeb71f6a4dc7832622629624995cbbb7c8654435d197019caa31f2129b47f02.scope - libcontainer container 6aeb71f6a4dc7832622629624995cbbb7c8654435d197019caa31f2129b47f02. Jan 20 01:24:31.535402 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:24:31.710159 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:24:32.927212 containerd[1591]: time="2026-01-20T01:24:32.914583944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kv4qq,Uid:4dabd437-ad8b-43e3-8171-6886c8d73dd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3417daa26537967e0ee14ecbc512d38f7b633b97875ae6fa6098262d1f451d34\"" Jan 20 01:24:33.037019 containerd[1591]: time="2026-01-20T01:24:33.036395595Z" level=info msg="CreateContainer within sandbox \"3417daa26537967e0ee14ecbc512d38f7b633b97875ae6fa6098262d1f451d34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:24:34.018399 containerd[1591]: time="2026-01-20T01:24:34.014115606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5nkf,Uid:60215ed4-4983-49ba-a00f-9c33cd98d1ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aeb71f6a4dc7832622629624995cbbb7c8654435d197019caa31f2129b47f02\"" Jan 20 01:24:34.132150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628093739.mount: Deactivated successfully. Jan 20 01:24:34.295024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053426336.mount: Deactivated successfully. 
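The delegateAdd dumps above show the same flannel-generated bridge config twice: once as a Go map, where the route destination is rendered as raw bytes (IP {0xc0, 0xa8, 0x0, 0x0}, mask {0xff, 0xff, 0x80, 0x0}), and once as the JSON handed to the bridge/host-local plugins, where that route appears as "dst":"192.168.0.0/17" alongside the node's "subnet":"192.168.0.0/24" and mtu 1450. A tiny sketch converting the byte form back to CIDR notation:

def cidr_from_go_bytes(ip, mask):
    """Render a Go net.IPNet{IP, Mask} byte dump as CIDR notation."""
    prefix_len = sum(bin(b).count("1") for b in mask)
    return ".".join(str(b) for b in ip) + f"/{prefix_len}"

# net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}
print(cidr_from_go_bytes(bytes([0xc0, 0xa8, 0x00, 0x00]),
                         bytes([0xff, 0xff, 0x80, 0x00])))
# -> 192.168.0.0/17, the "dst" route in the JSON netconf above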
Jan 20 01:24:34.356364 containerd[1591]: time="2026-01-20T01:24:34.321247871Z" level=info msg="Container 90d34cfa708c02d5eb853645cbe868e6cec450bcae5f11a601cf06807c7144fb: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:24:34.786927 containerd[1591]: time="2026-01-20T01:24:34.606733653Z" level=info msg="CreateContainer within sandbox \"3417daa26537967e0ee14ecbc512d38f7b633b97875ae6fa6098262d1f451d34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90d34cfa708c02d5eb853645cbe868e6cec450bcae5f11a601cf06807c7144fb\"" Jan 20 01:24:34.897043 containerd[1591]: time="2026-01-20T01:24:34.877270036Z" level=info msg="StartContainer for \"90d34cfa708c02d5eb853645cbe868e6cec450bcae5f11a601cf06807c7144fb\"" Jan 20 01:24:34.937134 containerd[1591]: time="2026-01-20T01:24:34.930274522Z" level=info msg="CreateContainer within sandbox \"6aeb71f6a4dc7832622629624995cbbb7c8654435d197019caa31f2129b47f02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:24:34.979899 containerd[1591]: time="2026-01-20T01:24:34.975272990Z" level=info msg="connecting to shim 90d34cfa708c02d5eb853645cbe868e6cec450bcae5f11a601cf06807c7144fb" address="unix:///run/containerd/s/61cf9990cfbc2aea6b722c2d7f8496163a054a747ba67eb239f69727a4b09a0f" protocol=ttrpc version=3 Jan 20 01:24:35.359004 containerd[1591]: time="2026-01-20T01:24:35.358742423Z" level=info msg="Container 2de31eaf1ff3a99719081618588eebfc8773fbbd550f93ec428d1ac342f57397: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:24:35.408048 systemd[1]: Started cri-containerd-90d34cfa708c02d5eb853645cbe868e6cec450bcae5f11a601cf06807c7144fb.scope - libcontainer container 90d34cfa708c02d5eb853645cbe868e6cec450bcae5f11a601cf06807c7144fb. Jan 20 01:24:35.831315 containerd[1591]: time="2026-01-20T01:24:35.826280247Z" level=info msg="CreateContainer within sandbox \"6aeb71f6a4dc7832622629624995cbbb7c8654435d197019caa31f2129b47f02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2de31eaf1ff3a99719081618588eebfc8773fbbd550f93ec428d1ac342f57397\"" Jan 20 01:24:35.927303 containerd[1591]: time="2026-01-20T01:24:35.868645956Z" level=info msg="StartContainer for \"2de31eaf1ff3a99719081618588eebfc8773fbbd550f93ec428d1ac342f57397\"" Jan 20 01:24:36.123145 containerd[1591]: time="2026-01-20T01:24:36.118029484Z" level=info msg="connecting to shim 2de31eaf1ff3a99719081618588eebfc8773fbbd550f93ec428d1ac342f57397" address="unix:///run/containerd/s/ba0d181edd0048e87535723e94bc8a7524ae2fb6871e68d1c87d9ab8d41494a5" protocol=ttrpc version=3 Jan 20 01:24:37.714082 systemd[1]: Started cri-containerd-2de31eaf1ff3a99719081618588eebfc8773fbbd550f93ec428d1ac342f57397.scope - libcontainer container 2de31eaf1ff3a99719081618588eebfc8773fbbd550f93ec428d1ac342f57397. 
Jan 20 01:24:38.175067 containerd[1591]: time="2026-01-20T01:24:38.122341967Z" level=info msg="StartContainer for \"90d34cfa708c02d5eb853645cbe868e6cec450bcae5f11a601cf06807c7144fb\" returns successfully"
Jan 20 01:24:39.197030 kubelet[3024]: I0120 01:24:39.172422 3024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kv4qq" podStartSLOduration=165.172395959 podStartE2EDuration="2m45.172395959s" podCreationTimestamp="2026-01-20 01:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:24:39.160203962 +0000 UTC m=+170.961942906" watchObservedRunningTime="2026-01-20 01:24:39.172395959 +0000 UTC m=+170.974134853"
Jan 20 01:24:39.455968 containerd[1591]: time="2026-01-20T01:24:39.453428353Z" level=info msg="StartContainer for \"2de31eaf1ff3a99719081618588eebfc8773fbbd550f93ec428d1ac342f57397\" returns successfully"
Jan 20 01:24:40.276003 kubelet[3024]: I0120 01:24:40.275918 3024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z5nkf" podStartSLOduration=166.275668096 podStartE2EDuration="2m46.275668096s" podCreationTimestamp="2026-01-20 01:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:24:40.236714839 +0000 UTC m=+172.038453734" watchObservedRunningTime="2026-01-20 01:24:40.275668096 +0000 UTC m=+172.077407000"
Jan 20 01:27:10.153246 containerd[1591]: time="2026-01-20T01:27:10.138654189Z" level=warning msg="container event discarded" container=147297391f352d5cadb2fe9a13bf1a2953df669a1d355d654117813d70373e6c type=CONTAINER_CREATED_EVENT
Jan 20 01:27:10.153246 containerd[1591]: time="2026-01-20T01:27:10.153093650Z" level=warning msg="container event discarded" container=147297391f352d5cadb2fe9a13bf1a2953df669a1d355d654117813d70373e6c type=CONTAINER_STARTED_EVENT
Jan 20 01:27:12.222182 containerd[1591]: time="2026-01-20T01:27:12.221504270Z" level=warning msg="container event discarded" container=2a04d7bed41df32a3c6f24452d8936143f6338f91b4bc7b8cd36d99f310958d0 type=CONTAINER_CREATED_EVENT
Jan 20 01:27:17.106913 containerd[1591]: time="2026-01-20T01:27:17.074247680Z" level=warning msg="container event discarded" container=2a04d7bed41df32a3c6f24452d8936143f6338f91b4bc7b8cd36d99f310958d0 type=CONTAINER_STARTED_EVENT
Jan 20 01:27:22.148597 containerd[1591]: time="2026-01-20T01:27:22.135320545Z" level=warning msg="container event discarded" container=483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8 type=CONTAINER_CREATED_EVENT
Jan 20 01:27:22.148597 containerd[1591]: time="2026-01-20T01:27:22.145313275Z" level=warning msg="container event discarded" container=483a682fc1cf0a83d0837ab8f3e0ca6aa7f2856bf8f3f5723c1a42082ecb59d8 type=CONTAINER_STARTED_EVENT
Jan 20 01:27:31.701149 containerd[1591]: time="2026-01-20T01:27:31.696165130Z" level=warning msg="container event discarded" container=53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417 type=CONTAINER_CREATED_EVENT
Jan 20 01:27:33.763324 containerd[1591]: time="2026-01-20T01:27:33.710025801Z" level=warning msg="container event discarded" container=53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417 type=CONTAINER_STARTED_EVENT
Jan 20 01:27:40.590317 containerd[1591]: time="2026-01-20T01:27:40.553058710Z" level=warning msg="container event discarded" container=53532d3b547934298809c0da13281dccb4c87d9ce83faecad25ee7cecfdf1417 type=CONTAINER_STOPPED_EVENT
Jan 20 01:27:51.869029 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:53060.service - OpenSSH per-connection server daemon (10.0.0.1:53060).
Jan 20 01:27:54.468190 kubelet[3024]: E0120 01:27:54.434722 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.626s"
Jan 20 01:27:56.042069 sshd[4715]: Accepted publickey for core from 10.0.0.1 port 53060 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:27:56.086169 sshd-session[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:27:56.180264 systemd-logind[1554]: New session 6 of user core.
Jan 20 01:27:56.289583 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 20 01:28:00.238996 sshd[4743]: Connection closed by 10.0.0.1 port 53060
Jan 20 01:28:00.257420 sshd-session[4715]: pam_unix(sshd:session): session closed for user core
Jan 20 01:28:00.415449 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:53060.service: Deactivated successfully.
Jan 20 01:28:00.529115 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 01:28:00.577980 systemd-logind[1554]: Session 6 logged out. Waiting for processes to exit.
Jan 20 01:28:00.613061 systemd-logind[1554]: Removed session 6.
Jan 20 01:28:04.880888 containerd[1591]: time="2026-01-20T01:28:04.878193447Z" level=warning msg="container event discarded" container=1910124b172dcc6377dad855e819810788952f41edba3351e1ae783ac45181c2 type=CONTAINER_STOPPED_EVENT
Jan 20 01:28:05.410247 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:53336.service - OpenSSH per-connection server daemon (10.0.0.1:53336).
Jan 20 01:28:05.421580 containerd[1591]: time="2026-01-20T01:28:05.413008402Z" level=warning msg="container event discarded" container=1a9df98e713f298198b440ea0cd69ef83f215c7eda2734e9d0d15dbff78b5ee7 type=CONTAINER_STOPPED_EVENT
Jan 20 01:28:06.496411 sshd[4781]: Accepted publickey for core from 10.0.0.1 port 53336 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:28:06.497421 containerd[1591]: time="2026-01-20T01:28:06.493369082Z" level=warning msg="container event discarded" container=9817170ea74088ae4fffd16b5aedc639811f13f1cddce0365e6dad07a72f83f0 type=CONTAINER_CREATED_EVENT
Jan 20 01:28:06.518013 sshd-session[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:28:06.730231 systemd-logind[1554]: New session 7 of user core.
Jan 20 01:28:06.814132 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 01:28:07.147929 containerd[1591]: time="2026-01-20T01:28:07.119931027Z" level=warning msg="container event discarded" container=224668e1b6b230986e4901bdfa18acd1eedb68ae669fb2291786078eceb4260d type=CONTAINER_CREATED_EVENT
Jan 20 01:28:10.033106 sshd[4793]: Connection closed by 10.0.0.1 port 53336
Jan 20 01:28:10.047970 sshd-session[4781]: pam_unix(sshd:session): session closed for user core
Jan 20 01:28:10.109680 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:53336.service: Deactivated successfully.
Jan 20 01:28:10.131058 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 01:28:10.195543 systemd-logind[1554]: Session 7 logged out. Waiting for processes to exit.
Jan 20 01:28:10.261146 systemd-logind[1554]: Removed session 7.
Jan 20 01:28:13.663125 containerd[1591]: time="2026-01-20T01:28:13.663028207Z" level=warning msg="container event discarded" container=9817170ea74088ae4fffd16b5aedc639811f13f1cddce0365e6dad07a72f83f0 type=CONTAINER_STARTED_EVENT
Jan 20 01:28:14.570136 containerd[1591]: time="2026-01-20T01:28:14.569284703Z" level=warning msg="container event discarded" container=224668e1b6b230986e4901bdfa18acd1eedb68ae669fb2291786078eceb4260d type=CONTAINER_STARTED_EVENT
Jan 20 01:28:15.169717 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:44594.service - OpenSSH per-connection server daemon (10.0.0.1:44594).
Jan 20 01:28:15.904278 sshd[4840]: Accepted publickey for core from 10.0.0.1 port 44594 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:28:15.920635 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:28:16.018956 systemd-logind[1554]: New session 8 of user core.
Jan 20 01:28:16.060179 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 20 01:28:18.108135 sshd[4843]: Connection closed by 10.0.0.1 port 44594
Jan 20 01:28:18.087510 sshd-session[4840]: pam_unix(sshd:session): session closed for user core
Jan 20 01:28:18.149699 systemd-logind[1554]: Session 8 logged out. Waiting for processes to exit.
Jan 20 01:28:18.166104 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:44594.service: Deactivated successfully.
Jan 20 01:28:18.205337 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 01:28:18.269541 systemd-logind[1554]: Removed session 8.
Jan 20 01:28:23.189295 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:44596.service - OpenSSH per-connection server daemon (10.0.0.1:44596).
Jan 20 01:28:24.715043 sshd[4889]: Accepted publickey for core from 10.0.0.1 port 44596 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:28:24.764056 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:28:24.902246 systemd-logind[1554]: New session 9 of user core.
Jan 20 01:28:24.947236 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 01:28:27.402449 sshd[4894]: Connection closed by 10.0.0.1 port 44596
Jan 20 01:28:27.419319 sshd-session[4889]: pam_unix(sshd:session): session closed for user core
Jan 20 01:28:27.458535 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:44596.service: Deactivated successfully.
Jan 20 01:28:27.485470 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 01:28:27.503909 systemd-logind[1554]: Session 9 logged out. Waiting for processes to exit.
Jan 20 01:28:27.522635 systemd-logind[1554]: Removed session 9.
Jan 20 01:28:33.201117 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:53794.service - OpenSSH per-connection server daemon (10.0.0.1:53794).
Jan 20 01:28:34.306154 sshd[4946]: Accepted publickey for core from 10.0.0.1 port 53794 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:28:34.328449 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:28:34.375549 systemd-logind[1554]: New session 10 of user core.
Jan 20 01:28:34.414386 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 20 01:28:35.731043 sshd[4954]: Connection closed by 10.0.0.1 port 53794
Jan 20 01:28:35.743188 sshd-session[4946]: pam_unix(sshd:session): session closed for user core
Jan 20 01:28:35.812101 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:53794.service: Deactivated successfully.
Jan 20 01:28:35.817208 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 01:28:35.877916 systemd-logind[1554]: Session 10 logged out. Waiting for processes to exit.
Jan 20 01:28:35.901264 systemd-logind[1554]: Removed session 10.
Jan 20 01:28:40.836034 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:59512.service - OpenSSH per-connection server daemon (10.0.0.1:59512).
Jan 20 01:28:41.628948 sshd[4990]: Accepted publickey for core from 10.0.0.1 port 59512 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:28:41.661513 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:28:41.732169 systemd-logind[1554]: New session 11 of user core.
Jan 20 01:28:41.802550 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 20 01:28:48.263489 sshd[4993]: Connection closed by 10.0.0.1 port 59512
Jan 20 01:28:48.264700 sshd-session[4990]: pam_unix(sshd:session): session closed for user core
Jan 20 01:28:48.392074 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:59512.service: Deactivated successfully.
Jan 20 01:28:48.450490 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 01:28:48.451081 systemd[1]: session-11.scope: Consumed 1.001s CPU time, 17.4M memory peak.
Jan 20 01:28:48.525590 systemd-logind[1554]: Session 11 logged out. Waiting for processes to exit.
Jan 20 01:28:48.586581 systemd-logind[1554]: Removed session 11.
Jan 20 01:28:53.427466 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:37688.service - OpenSSH per-connection server daemon (10.0.0.1:37688).
Jan 20 01:28:54.625437 sshd[5032]: Accepted publickey for core from 10.0.0.1 port 37688 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:28:54.658576 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:28:54.797019 systemd-logind[1554]: New session 12 of user core.
Jan 20 01:28:54.892538 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 20 01:28:56.592609 sshd[5057]: Connection closed by 10.0.0.1 port 37688
Jan 20 01:28:56.595058 sshd-session[5032]: pam_unix(sshd:session): session closed for user core
Jan 20 01:28:56.645376 systemd-logind[1554]: Session 12 logged out. Waiting for processes to exit.
Jan 20 01:28:56.654617 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:37688.service: Deactivated successfully.
Jan 20 01:28:56.668424 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 01:28:56.707326 systemd-logind[1554]: Removed session 12.
Jan 20 01:29:01.987346 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:37730.service - OpenSSH per-connection server daemon (10.0.0.1:37730).
Jan 20 01:29:03.409727 sshd[5093]: Accepted publickey for core from 10.0.0.1 port 37730 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:29:03.420420 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:29:03.578926 systemd-logind[1554]: New session 13 of user core.
Jan 20 01:29:03.621922 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 20 01:29:05.128300 sshd[5096]: Connection closed by 10.0.0.1 port 37730
Jan 20 01:29:05.127620 sshd-session[5093]: pam_unix(sshd:session): session closed for user core
Jan 20 01:29:05.208699 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:37730.service: Deactivated successfully.
Jan 20 01:29:05.286466 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 01:29:05.329557 systemd-logind[1554]: Session 13 logged out. Waiting for processes to exit.
Jan 20 01:29:05.365945 systemd-logind[1554]: Removed session 13.
Jan 20 01:29:10.103263 containerd[1591]: time="2026-01-20T01:29:10.102644208Z" level=warning msg="container event discarded" container=95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f type=CONTAINER_CREATED_EVENT
Jan 20 01:29:10.273398 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:50694.service - OpenSSH per-connection server daemon (10.0.0.1:50694).
Jan 20 01:29:11.027651 sshd[5138]: Accepted publickey for core from 10.0.0.1 port 50694 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:29:11.073507 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:29:11.216337 systemd-logind[1554]: New session 14 of user core.
Jan 20 01:29:11.288475 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 20 01:29:14.574948 sshd[5153]: Connection closed by 10.0.0.1 port 50694
Jan 20 01:29:14.606330 sshd-session[5138]: pam_unix(sshd:session): session closed for user core
Jan 20 01:29:14.628007 containerd[1591]: time="2026-01-20T01:29:14.627735487Z" level=warning msg="container event discarded" container=95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f type=CONTAINER_STARTED_EVENT
Jan 20 01:29:14.703314 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:50694.service: Deactivated successfully.
Jan 20 01:29:14.730381 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 01:29:14.770707 systemd-logind[1554]: Session 14 logged out. Waiting for processes to exit.
Jan 20 01:29:14.785981 systemd-logind[1554]: Removed session 14.
Jan 20 01:29:15.966554 containerd[1591]: time="2026-01-20T01:29:15.942397936Z" level=warning msg="container event discarded" container=95937e6c26ef382e0ef9dd4de5d42b98e596deea51f51513a3dd044e12d0f68f type=CONTAINER_STOPPED_EVENT
Jan 20 01:29:17.371568 containerd[1591]: time="2026-01-20T01:29:17.369496254Z" level=warning msg="container event discarded" container=e9dab379d936c18e75d843bcbf0cb34c81a5547edeefe66ea04660a0330b033d type=CONTAINER_CREATED_EVENT
Jan 20 01:29:18.628521 containerd[1591]: time="2026-01-20T01:29:18.625703041Z" level=warning msg="container event discarded" container=e9dab379d936c18e75d843bcbf0cb34c81a5547edeefe66ea04660a0330b033d type=CONTAINER_STARTED_EVENT
Jan 20 01:29:19.796609 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:55194.service - OpenSSH per-connection server daemon (10.0.0.1:55194).
Jan 20 01:29:21.357475 sshd[5197]: Accepted publickey for core from 10.0.0.1 port 55194 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:29:21.394743 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:29:21.504315 systemd-logind[1554]: New session 15 of user core.
Jan 20 01:29:21.593345 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 20 01:29:23.584328 sshd[5207]: Connection closed by 10.0.0.1 port 55194
Jan 20 01:29:23.588572 sshd-session[5197]: pam_unix(sshd:session): session closed for user core
Jan 20 01:29:23.631655 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:55194.service: Deactivated successfully.
Jan 20 01:29:23.656715 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 01:29:23.686232 systemd-logind[1554]: Session 15 logged out. Waiting for processes to exit.
Jan 20 01:29:23.734508 systemd-logind[1554]: Removed session 15.
Jan 20 01:29:28.735106 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:58592.service - OpenSSH per-connection server daemon (10.0.0.1:58592).
Jan 20 01:29:29.870881 sshd[5244]: Accepted publickey for core from 10.0.0.1 port 58592 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:29:29.864682 sshd-session[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:29:29.980590 systemd-logind[1554]: New session 16 of user core.
Jan 20 01:29:30.092443 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 20 01:29:32.094427 sshd[5256]: Connection closed by 10.0.0.1 port 58592
Jan 20 01:29:32.112662 sshd-session[5244]: pam_unix(sshd:session): session closed for user core
Jan 20 01:29:32.266351 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:58592.service: Deactivated successfully.
Jan 20 01:29:32.333686 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 01:29:32.402532 systemd-logind[1554]: Session 16 logged out. Waiting for processes to exit.
Jan 20 01:29:32.472394 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:58594.service - OpenSSH per-connection server daemon (10.0.0.1:58594).
Jan 20 01:29:32.504744 systemd-logind[1554]: Removed session 16.
Jan 20 01:29:32.955463 containerd[1591]: time="2026-01-20T01:29:32.934197766Z" level=warning msg="container event discarded" container=3417daa26537967e0ee14ecbc512d38f7b633b97875ae6fa6098262d1f451d34 type=CONTAINER_CREATED_EVENT
Jan 20 01:29:32.955463 containerd[1591]: time="2026-01-20T01:29:32.934256314Z" level=warning msg="container event discarded" container=3417daa26537967e0ee14ecbc512d38f7b633b97875ae6fa6098262d1f451d34 type=CONTAINER_STARTED_EVENT
Jan 20 01:29:34.698068 containerd[1591]: time="2026-01-20T01:29:34.458515874Z" level=warning msg="container event discarded" container=6aeb71f6a4dc7832622629624995cbbb7c8654435d197019caa31f2129b47f02 type=CONTAINER_CREATED_EVENT
Jan 20 01:29:34.698068 containerd[1591]: time="2026-01-20T01:29:34.681020945Z" level=warning msg="container event discarded" container=6aeb71f6a4dc7832622629624995cbbb7c8654435d197019caa31f2129b47f02 type=CONTAINER_STARTED_EVENT
Jan 20 01:29:34.865243 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 58594 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:29:34.866159 containerd[1591]: time="2026-01-20T01:29:34.864335072Z" level=warning msg="container event discarded" container=90d34cfa708c02d5eb853645cbe868e6cec450bcae5f11a601cf06807c7144fb type=CONTAINER_CREATED_EVENT
Jan 20 01:29:34.862377 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:29:35.024440 systemd-logind[1554]: New session 17 of user core.
Jan 20 01:29:35.115534 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 20 01:29:35.886369 containerd[1591]: time="2026-01-20T01:29:35.886306832Z" level=warning msg="container event discarded" container=2de31eaf1ff3a99719081618588eebfc8773fbbd550f93ec428d1ac342f57397 type=CONTAINER_CREATED_EVENT
Jan 20 01:29:38.123090 containerd[1591]: time="2026-01-20T01:29:38.122599658Z" level=warning msg="container event discarded" container=90d34cfa708c02d5eb853645cbe868e6cec450bcae5f11a601cf06807c7144fb type=CONTAINER_STARTED_EVENT
Jan 20 01:29:38.955268 kubelet[3024]: E0120 01:29:38.955207 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:29:39.437453 containerd[1591]: time="2026-01-20T01:29:39.437256622Z" level=warning msg="container event discarded" container=2de31eaf1ff3a99719081618588eebfc8773fbbd550f93ec428d1ac342f57397 type=CONTAINER_STARTED_EVENT
Jan 20 01:29:39.750312 kubelet[3024]: E0120 01:29:39.747444 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:29:39.954478 sshd[5285]: Connection closed by 10.0.0.1 port 58594
Jan 20 01:29:40.042713 sshd-session[5276]: pam_unix(sshd:session): session closed for user core
Jan 20 01:29:40.154305 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:47060.service - OpenSSH per-connection server daemon (10.0.0.1:47060).
Jan 20 01:29:40.170216 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:58594.service: Deactivated successfully.
Jan 20 01:29:40.226732 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 01:29:40.288478 systemd-logind[1554]: Session 17 logged out. Waiting for processes to exit.
Jan 20 01:29:40.425702 systemd-logind[1554]: Removed session 17.
Jan 20 01:29:41.649123 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 47060 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:29:41.654248 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:29:41.766278 systemd-logind[1554]: New session 18 of user core.
Jan 20 01:29:41.782507 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 20 01:29:45.019561 sshd[5325]: Connection closed by 10.0.0.1 port 47060
Jan 20 01:29:45.027547 sshd-session[5319]: pam_unix(sshd:session): session closed for user core
Jan 20 01:29:45.122311 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:47060.service: Deactivated successfully.
Jan 20 01:29:45.187351 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 01:29:45.221239 systemd-logind[1554]: Session 18 logged out. Waiting for processes to exit.
Jan 20 01:29:45.328181 systemd-logind[1554]: Removed session 18.
Jan 20 01:29:50.304301 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:40518.service - OpenSSH per-connection server daemon (10.0.0.1:40518).
Jan 20 01:29:50.813714 kubelet[3024]: E0120 01:29:50.813662 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:29:52.425999 sshd[5366]: Accepted publickey for core from 10.0.0.1 port 40518 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:29:52.478497 sshd-session[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:29:52.710125 systemd-logind[1554]: New session 19 of user core.
Jan 20 01:29:52.775397 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 20 01:29:54.149008 sshd[5383]: Connection closed by 10.0.0.1 port 40518
Jan 20 01:29:54.146644 sshd-session[5366]: pam_unix(sshd:session): session closed for user core
Jan 20 01:29:54.193518 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:40518.service: Deactivated successfully.
Jan 20 01:29:54.229439 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 01:29:54.261331 systemd-logind[1554]: Session 19 logged out. Waiting for processes to exit.
Jan 20 01:29:54.305283 systemd-logind[1554]: Removed session 19.
Jan 20 01:29:55.767501 kubelet[3024]: E0120 01:29:55.764489 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:29:59.393092 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:40126.service - OpenSSH per-connection server daemon (10.0.0.1:40126).
Jan 20 01:30:00.532120 sshd[5422]: Accepted publickey for core from 10.0.0.1 port 40126 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:30:00.578400 sshd-session[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:30:00.730120 systemd-logind[1554]: New session 20 of user core.
Jan 20 01:30:00.789586 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 20 01:30:04.113324 sshd[5425]: Connection closed by 10.0.0.1 port 40126
Jan 20 01:30:04.128225 sshd-session[5422]: pam_unix(sshd:session): session closed for user core
Jan 20 01:30:04.216182 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:40126.service: Deactivated successfully.
Jan 20 01:30:04.317101 systemd[1]: session-20.scope: Deactivated successfully.
Jan 20 01:30:04.340551 systemd-logind[1554]: Session 20 logged out. Waiting for processes to exit.
Jan 20 01:30:04.478341 systemd-logind[1554]: Removed session 20.
Jan 20 01:30:09.306357 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:54938.service - OpenSSH per-connection server daemon (10.0.0.1:54938).
Jan 20 01:30:10.052589 sshd[5465]: Accepted publickey for core from 10.0.0.1 port 54938 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:30:10.094731 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:30:10.182616 systemd-logind[1554]: New session 21 of user core.
Jan 20 01:30:10.235299 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 20 01:30:10.852302 kubelet[3024]: E0120 01:30:10.851205 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:30:12.268960 sshd[5483]: Connection closed by 10.0.0.1 port 54938
Jan 20 01:30:12.272220 sshd-session[5465]: pam_unix(sshd:session): session closed for user core
Jan 20 01:30:12.383222 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:54938.service: Deactivated successfully.
Jan 20 01:30:12.442301 systemd[1]: session-21.scope: Deactivated successfully.
Jan 20 01:30:12.480361 systemd-logind[1554]: Session 21 logged out. Waiting for processes to exit.
Jan 20 01:30:12.524513 systemd-logind[1554]: Removed session 21.
Jan 20 01:30:15.013005 kubelet[3024]: E0120 01:30:15.005109 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:30:17.526322 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:46342.service - OpenSSH per-connection server daemon (10.0.0.1:46342).
Jan 20 01:30:18.401218 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 46342 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:30:18.429457 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:30:18.760369 systemd-logind[1554]: New session 22 of user core.
Jan 20 01:30:18.872139 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 20 01:30:19.889886 sshd[5527]: Connection closed by 10.0.0.1 port 46342
Jan 20 01:30:19.908500 sshd-session[5518]: pam_unix(sshd:session): session closed for user core
Jan 20 01:30:20.686501 systemd-logind[1554]: Session 22 logged out. Waiting for processes to exit.
Jan 20 01:30:20.689106 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:46342.service: Deactivated successfully.
Jan 20 01:30:20.701480 systemd[1]: session-22.scope: Deactivated successfully.
Jan 20 01:30:20.713389 systemd-logind[1554]: Removed session 22.
Jan 20 01:30:24.965494 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:36852.service - OpenSSH per-connection server daemon (10.0.0.1:36852).
Jan 20 01:30:25.380698 sshd[5564]: Accepted publickey for core from 10.0.0.1 port 36852 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:30:25.397493 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:30:25.490136 systemd-logind[1554]: New session 23 of user core.
Jan 20 01:30:25.560904 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 20 01:30:27.177446 sshd[5567]: Connection closed by 10.0.0.1 port 36852
Jan 20 01:30:27.231462 sshd-session[5564]: pam_unix(sshd:session): session closed for user core
Jan 20 01:30:27.311010 systemd-logind[1554]: Session 23 logged out. Waiting for processes to exit.
Jan 20 01:30:27.319510 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:36852.service: Deactivated successfully.
Jan 20 01:30:27.420253 systemd[1]: session-23.scope: Deactivated successfully.
Jan 20 01:30:27.500216 systemd-logind[1554]: Removed session 23.
Jan 20 01:30:32.426741 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:36862.service - OpenSSH per-connection server daemon (10.0.0.1:36862).
Jan 20 01:30:33.206049 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 36862 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:30:33.240725 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:30:33.389608 systemd-logind[1554]: New session 24 of user core.
Jan 20 01:30:33.484380 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 20 01:30:35.081977 sshd[5621]: Connection closed by 10.0.0.1 port 36862
Jan 20 01:30:35.079234 sshd-session[5602]: pam_unix(sshd:session): session closed for user core
Jan 20 01:30:35.114582 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:36862.service: Deactivated successfully.
Jan 20 01:30:35.160571 systemd[1]: session-24.scope: Deactivated successfully.
Jan 20 01:30:35.183703 systemd-logind[1554]: Session 24 logged out. Waiting for processes to exit.
Jan 20 01:30:35.286186 systemd-logind[1554]: Removed session 24.
Jan 20 01:30:39.564888 kubelet[3024]: E0120 01:30:39.532307 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:30:40.431571 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:37620.service - OpenSSH per-connection server daemon (10.0.0.1:37620).
Jan 20 01:30:41.560030 sshd[5652]: Accepted publickey for core from 10.0.0.1 port 37620 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:30:41.574334 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:30:41.665032 systemd-logind[1554]: New session 25 of user core.
Jan 20 01:30:41.727729 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 20 01:30:43.647911 sshd[5663]: Connection closed by 10.0.0.1 port 37620
Jan 20 01:30:43.663234 sshd-session[5652]: pam_unix(sshd:session): session closed for user core
Jan 20 01:30:43.790063 systemd-logind[1554]: Session 25 logged out. Waiting for processes to exit.
Jan 20 01:30:43.796683 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:37620.service: Deactivated successfully.
Jan 20 01:30:43.832284 systemd[1]: session-25.scope: Deactivated successfully.
Jan 20 01:30:43.955134 systemd-logind[1554]: Removed session 25.
Jan 20 01:30:49.200331 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:42622.service - OpenSSH per-connection server daemon (10.0.0.1:42622).
Jan 20 01:30:50.082747 sshd[5697]: Accepted publickey for core from 10.0.0.1 port 42622 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:30:50.117333 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:30:50.252913 systemd-logind[1554]: New session 26 of user core.
Jan 20 01:30:50.267590 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 20 01:30:52.348244 sshd[5700]: Connection closed by 10.0.0.1 port 42622
Jan 20 01:30:52.352291 sshd-session[5697]: pam_unix(sshd:session): session closed for user core
Jan 20 01:30:52.477048 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:42622.service: Deactivated successfully.
Jan 20 01:30:52.563336 systemd[1]: session-26.scope: Deactivated successfully.
Jan 20 01:30:52.606532 systemd-logind[1554]: Session 26 logged out. Waiting for processes to exit.
Jan 20 01:30:52.674536 systemd-logind[1554]: Removed session 26.
Jan 20 01:30:57.560038 systemd[1]: Started sshd@27-10.0.0.13:22-10.0.0.1:55250.service - OpenSSH per-connection server daemon (10.0.0.1:55250).
Jan 20 01:30:59.798654 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 55250 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:30:59.872027 sshd-session[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:31:00.070680 systemd-logind[1554]: New session 27 of user core.
Jan 20 01:31:00.118513 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 20 01:31:00.811616 kubelet[3024]: E0120 01:31:00.794701 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:03.424965 sshd[5764]: Connection closed by 10.0.0.1 port 55250
Jan 20 01:31:03.442462 sshd-session[5748]: pam_unix(sshd:session): session closed for user core
Jan 20 01:31:03.663589 systemd[1]: sshd@27-10.0.0.13:22-10.0.0.1:55250.service: Deactivated successfully.
Jan 20 01:31:03.689249 systemd[1]: session-27.scope: Deactivated successfully.
Jan 20 01:31:03.974183 systemd-logind[1554]: Session 27 logged out. Waiting for processes to exit.
Jan 20 01:31:04.006010 systemd[1]: Started sshd@28-10.0.0.13:22-10.0.0.1:55264.service - OpenSSH per-connection server daemon (10.0.0.1:55264).
Jan 20 01:31:04.032071 systemd-logind[1554]: Removed session 27.
Jan 20 01:31:04.551165 sshd[5789]: Accepted publickey for core from 10.0.0.1 port 55264 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:31:04.582511 sshd-session[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:31:04.685970 systemd-logind[1554]: New session 28 of user core.
Jan 20 01:31:04.717055 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 20 01:31:07.744078 kubelet[3024]: E0120 01:31:07.740930 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:09.105101 sshd[5802]: Connection closed by 10.0.0.1 port 55264
Jan 20 01:31:09.101627 sshd-session[5789]: pam_unix(sshd:session): session closed for user core
Jan 20 01:31:09.188587 systemd[1]: sshd@28-10.0.0.13:22-10.0.0.1:55264.service: Deactivated successfully.
Jan 20 01:31:09.215489 systemd[1]: session-28.scope: Deactivated successfully.
Jan 20 01:31:09.235514 systemd-logind[1554]: Session 28 logged out. Waiting for processes to exit.
Jan 20 01:31:09.278718 systemd[1]: Started sshd@29-10.0.0.13:22-10.0.0.1:34540.service - OpenSSH per-connection server daemon (10.0.0.1:34540).
Jan 20 01:31:09.285369 systemd-logind[1554]: Removed session 28.
Jan 20 01:31:09.690069 sshd[5820]: Accepted publickey for core from 10.0.0.1 port 34540 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:31:09.711665 sshd-session[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:31:09.825905 systemd-logind[1554]: New session 29 of user core.
Jan 20 01:31:10.152730 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 20 01:31:15.971033 sshd[5838]: Connection closed by 10.0.0.1 port 34540
Jan 20 01:31:15.983047 sshd-session[5820]: pam_unix(sshd:session): session closed for user core
Jan 20 01:31:16.072098 systemd[1]: Started sshd@30-10.0.0.13:22-10.0.0.1:33814.service - OpenSSH per-connection server daemon (10.0.0.1:33814).
Jan 20 01:31:16.119689 systemd[1]: sshd@29-10.0.0.13:22-10.0.0.1:34540.service: Deactivated successfully.
Jan 20 01:31:16.139540 systemd[1]: session-29.scope: Deactivated successfully.
Jan 20 01:31:16.144749 systemd[1]: session-29.scope: Consumed 1.544s CPU time, 45.9M memory peak.
Jan 20 01:31:16.218021 systemd-logind[1554]: Session 29 logged out. Waiting for processes to exit.
Jan 20 01:31:16.298101 systemd-logind[1554]: Removed session 29.
Jan 20 01:31:17.168327 sshd[5876]: Accepted publickey for core from 10.0.0.1 port 33814 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:31:17.196147 sshd-session[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:31:17.378446 systemd-logind[1554]: New session 30 of user core.
Jan 20 01:31:17.451575 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 20 01:31:17.768365 kubelet[3024]: E0120 01:31:17.764967 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:21.381515 sshd[5886]: Connection closed by 10.0.0.1 port 33814
Jan 20 01:31:21.383189 sshd-session[5876]: pam_unix(sshd:session): session closed for user core
Jan 20 01:31:21.610522 systemd[1]: sshd@30-10.0.0.13:22-10.0.0.1:33814.service: Deactivated successfully.
Jan 20 01:31:21.759567 kubelet[3024]: E0120 01:31:21.747050 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:21.823915 systemd[1]: session-30.scope: Deactivated successfully.
Jan 20 01:31:22.187058 systemd-logind[1554]: Session 30 logged out. Waiting for processes to exit.
Jan 20 01:31:22.334036 systemd[1]: Started sshd@31-10.0.0.13:22-10.0.0.1:33822.service - OpenSSH per-connection server daemon (10.0.0.1:33822).
Jan 20 01:31:22.480071 systemd-logind[1554]: Removed session 30.
Jan 20 01:31:23.994383 sshd[5919]: Accepted publickey for core from 10.0.0.1 port 33822 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:31:24.048553 sshd-session[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:31:24.216088 systemd-logind[1554]: New session 31 of user core.
Jan 20 01:31:24.300300 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 20 01:31:24.743578 kubelet[3024]: E0120 01:31:24.740994 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:25.962615 sshd[5930]: Connection closed by 10.0.0.1 port 33822
Jan 20 01:31:25.961267 sshd-session[5919]: pam_unix(sshd:session): session closed for user core
Jan 20 01:31:26.031599 systemd[1]: sshd@31-10.0.0.13:22-10.0.0.1:33822.service: Deactivated successfully.
Jan 20 01:31:26.079125 systemd[1]: session-31.scope: Deactivated successfully.
Jan 20 01:31:26.107740 systemd-logind[1554]: Session 31 logged out. Waiting for processes to exit.
Jan 20 01:31:26.122619 systemd-logind[1554]: Removed session 31.
Jan 20 01:31:31.093073 systemd[1]: Started sshd@32-10.0.0.13:22-10.0.0.1:35608.service - OpenSSH per-connection server daemon (10.0.0.1:35608).
Jan 20 01:31:31.978390 sshd[5965]: Accepted publickey for core from 10.0.0.1 port 35608 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:31:32.021075 sshd-session[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:31:32.186013 systemd-logind[1554]: New session 32 of user core.
Jan 20 01:31:32.282009 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 20 01:31:34.815450 kubelet[3024]: E0120 01:31:34.815401 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:34.882543 sshd[5968]: Connection closed by 10.0.0.1 port 35608
Jan 20 01:31:34.875684 sshd-session[5965]: pam_unix(sshd:session): session closed for user core
Jan 20 01:31:34.965545 systemd[1]: sshd@32-10.0.0.13:22-10.0.0.1:35608.service: Deactivated successfully.
Jan 20 01:31:35.003304 systemd[1]: session-32.scope: Deactivated successfully.
Jan 20 01:31:35.077703 systemd-logind[1554]: Session 32 logged out. Waiting for processes to exit.
Jan 20 01:31:35.122243 systemd-logind[1554]: Removed session 32.
Jan 20 01:31:48.602474 systemd[1]: Started sshd@33-10.0.0.13:22-10.0.0.1:54708.service - OpenSSH per-connection server daemon (10.0.0.1:54708).
Jan 20 01:31:54.689537 kubelet[3024]: E0120 01:31:54.571155 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.757s"
Jan 20 01:31:55.034288 kubelet[3024]: E0120 01:31:55.015472 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:31:55.701316 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 54708 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:31:55.884963 sshd-session[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:31:56.120738 systemd-logind[1554]: New session 33 of user core.
Jan 20 01:31:56.198601 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 20 01:31:58.253606 sshd[6030]: Connection closed by 10.0.0.1 port 54708
Jan 20 01:31:58.257521 sshd-session[6006]: pam_unix(sshd:session): session closed for user core
Jan 20 01:31:58.402547 systemd[1]: sshd@33-10.0.0.13:22-10.0.0.1:54708.service: Deactivated successfully.
Jan 20 01:31:58.542710 systemd[1]: session-33.scope: Deactivated successfully.
Jan 20 01:31:58.648160 systemd-logind[1554]: Session 33 logged out. Waiting for processes to exit.
Jan 20 01:31:58.759524 systemd-logind[1554]: Removed session 33.
Jan 20 01:32:03.570308 systemd[1]: Started sshd@34-10.0.0.13:22-10.0.0.1:43142.service - OpenSSH per-connection server daemon (10.0.0.1:43142).
Jan 20 01:32:05.186640 sshd[6065]: Accepted publickey for core from 10.0.0.1 port 43142 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:05.206719 sshd-session[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:05.285639 systemd-logind[1554]: New session 34 of user core.
Jan 20 01:32:05.342377 systemd[1]: Started session-34.scope - Session 34 of User core.
Jan 20 01:32:05.768741 kubelet[3024]: E0120 01:32:05.762046 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:07.103727 sshd[6074]: Connection closed by 10.0.0.1 port 43142
Jan 20 01:32:07.114549 sshd-session[6065]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:07.357687 systemd[1]: sshd@34-10.0.0.13:22-10.0.0.1:43142.service: Deactivated successfully.
Jan 20 01:32:07.455406 systemd-logind[1554]: Session 34 logged out. Waiting for processes to exit.
Jan 20 01:32:07.496385 systemd[1]: session-34.scope: Deactivated successfully.
Jan 20 01:32:07.599540 systemd-logind[1554]: Removed session 34.
Jan 20 01:32:12.212185 systemd[1]: Started sshd@35-10.0.0.13:22-10.0.0.1:34664.service - OpenSSH per-connection server daemon (10.0.0.1:34664).
Jan 20 01:32:12.847434 sshd[6110]: Accepted publickey for core from 10.0.0.1 port 34664 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:12.882490 sshd-session[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:12.985256 systemd-logind[1554]: New session 35 of user core.
Jan 20 01:32:13.016744 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 20 01:32:14.370416 sshd[6113]: Connection closed by 10.0.0.1 port 34664
Jan 20 01:32:14.371325 sshd-session[6110]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:14.426324 systemd[1]: sshd@35-10.0.0.13:22-10.0.0.1:34664.service: Deactivated successfully.
Jan 20 01:32:14.454477 systemd[1]: session-35.scope: Deactivated successfully.
Jan 20 01:32:14.480289 systemd-logind[1554]: Session 35 logged out. Waiting for processes to exit.
Jan 20 01:32:14.489682 systemd-logind[1554]: Removed session 35.
Jan 20 01:32:23.601033 kubelet[3024]: E0120 01:32:23.595393 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:23.630403 systemd[1]: Started sshd@36-10.0.0.13:22-10.0.0.1:33792.service - OpenSSH per-connection server daemon (10.0.0.1:33792).
Jan 20 01:32:24.792206 sshd[6147]: Accepted publickey for core from 10.0.0.1 port 33792 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:24.823650 sshd-session[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:24.988083 systemd-logind[1554]: New session 36 of user core.
Jan 20 01:32:25.023424 systemd[1]: Started session-36.scope - Session 36 of User core.
Jan 20 01:32:26.208283 sshd[6173]: Connection closed by 10.0.0.1 port 33792
Jan 20 01:32:26.216643 sshd-session[6147]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:26.275074 systemd-logind[1554]: Session 36 logged out. Waiting for processes to exit.
Jan 20 01:32:26.278514 systemd[1]: sshd@36-10.0.0.13:22-10.0.0.1:33792.service: Deactivated successfully.
Jan 20 01:32:26.299137 systemd[1]: session-36.scope: Deactivated successfully.
Jan 20 01:32:26.362511 systemd-logind[1554]: Removed session 36.
Jan 20 01:32:26.743227 kubelet[3024]: E0120 01:32:26.735255 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:31.396428 systemd[1]: Started sshd@37-10.0.0.13:22-10.0.0.1:56564.service - OpenSSH per-connection server daemon (10.0.0.1:56564).
Jan 20 01:32:32.019546 sshd[6207]: Accepted publickey for core from 10.0.0.1 port 56564 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:32.031639 sshd-session[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:32.131977 systemd-logind[1554]: New session 37 of user core.
Jan 20 01:32:32.171373 systemd[1]: Started session-37.scope - Session 37 of User core.
Jan 20 01:32:33.154293 sshd[6210]: Connection closed by 10.0.0.1 port 56564
Jan 20 01:32:33.158378 sshd-session[6207]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:33.186691 systemd[1]: sshd@37-10.0.0.13:22-10.0.0.1:56564.service: Deactivated successfully.
Jan 20 01:32:33.216655 systemd[1]: session-37.scope: Deactivated successfully.
Jan 20 01:32:33.230690 systemd-logind[1554]: Session 37 logged out. Waiting for processes to exit.
Jan 20 01:32:33.262445 systemd-logind[1554]: Removed session 37.
Jan 20 01:32:38.364888 systemd[1]: Started sshd@38-10.0.0.13:22-10.0.0.1:51798.service - OpenSSH per-connection server daemon (10.0.0.1:51798).
Jan 20 01:32:39.240539 sshd[6244]: Accepted publickey for core from 10.0.0.1 port 51798 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:39.262016 sshd-session[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:39.405043 systemd-logind[1554]: New session 38 of user core.
Jan 20 01:32:39.429069 systemd[1]: Started session-38.scope - Session 38 of User core.
Jan 20 01:32:39.752981 kubelet[3024]: E0120 01:32:39.734254 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:40.722163 sshd[6247]: Connection closed by 10.0.0.1 port 51798
Jan 20 01:32:40.724558 sshd-session[6244]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:40.784160 systemd[1]: sshd@38-10.0.0.13:22-10.0.0.1:51798.service: Deactivated successfully.
Jan 20 01:32:40.823313 systemd[1]: session-38.scope: Deactivated successfully.
Jan 20 01:32:40.859006 systemd-logind[1554]: Session 38 logged out. Waiting for processes to exit.
Jan 20 01:32:40.890629 systemd-logind[1554]: Removed session 38.
Jan 20 01:32:41.764280 kubelet[3024]: E0120 01:32:41.762004 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:45.847734 systemd[1]: Started sshd@39-10.0.0.13:22-10.0.0.1:36116.service - OpenSSH per-connection server daemon (10.0.0.1:36116).
Jan 20 01:32:46.852060 sshd[6288]: Accepted publickey for core from 10.0.0.1 port 36116 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:46.922205 sshd-session[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:47.084480 systemd-logind[1554]: New session 39 of user core.
Jan 20 01:32:47.176589 systemd[1]: Started session-39.scope - Session 39 of User core.
Jan 20 01:32:47.752906 kubelet[3024]: E0120 01:32:47.752432 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:32:48.694095 sshd[6301]: Connection closed by 10.0.0.1 port 36116
Jan 20 01:32:48.700198 sshd-session[6288]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:48.794370 systemd[1]: sshd@39-10.0.0.13:22-10.0.0.1:36116.service: Deactivated successfully.
Jan 20 01:32:48.804366 systemd-logind[1554]: Session 39 logged out. Waiting for processes to exit.
Jan 20 01:32:48.864450 systemd[1]: session-39.scope: Deactivated successfully.
Jan 20 01:32:48.974296 systemd-logind[1554]: Removed session 39.
Jan 20 01:32:53.983717 systemd[1]: Started sshd@40-10.0.0.13:22-10.0.0.1:36118.service - OpenSSH per-connection server daemon (10.0.0.1:36118).
Jan 20 01:32:54.869261 sshd[6342]: Accepted publickey for core from 10.0.0.1 port 36118 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:32:54.904283 sshd-session[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:32:55.006424 systemd-logind[1554]: New session 40 of user core.
Jan 20 01:32:55.093383 systemd[1]: Started session-40.scope - Session 40 of User core.
Jan 20 01:32:57.493904 sshd[6347]: Connection closed by 10.0.0.1 port 36118
Jan 20 01:32:57.481999 sshd-session[6342]: pam_unix(sshd:session): session closed for user core
Jan 20 01:32:57.564166 systemd[1]: sshd@40-10.0.0.13:22-10.0.0.1:36118.service: Deactivated successfully.
Jan 20 01:32:57.601363 systemd[1]: session-40.scope: Deactivated successfully.
Jan 20 01:32:57.623964 systemd-logind[1554]: Session 40 logged out. Waiting for processes to exit.
Jan 20 01:32:57.676989 systemd-logind[1554]: Removed session 40.
Jan 20 01:32:57.769865 kubelet[3024]: E0120 01:32:57.754388 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:02.570445 systemd[1]: Started sshd@41-10.0.0.13:22-10.0.0.1:56958.service - OpenSSH per-connection server daemon (10.0.0.1:56958).
Jan 20 01:33:03.176969 sshd[6388]: Accepted publickey for core from 10.0.0.1 port 56958 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:03.200714 sshd-session[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:03.314027 systemd-logind[1554]: New session 41 of user core.
Jan 20 01:33:03.324020 systemd[1]: Started session-41.scope - Session 41 of User core.
Jan 20 01:33:05.943271 sshd[6391]: Connection closed by 10.0.0.1 port 56958
Jan 20 01:33:05.961715 sshd-session[6388]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:06.007289 systemd[1]: sshd@41-10.0.0.13:22-10.0.0.1:56958.service: Deactivated successfully.
Jan 20 01:33:06.033274 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 01:33:06.051237 systemd-logind[1554]: Session 41 logged out. Waiting for processes to exit.
Jan 20 01:33:06.064128 systemd-logind[1554]: Removed session 41.
Jan 20 01:33:11.158636 systemd[1]: Started sshd@42-10.0.0.13:22-10.0.0.1:36026.service - OpenSSH per-connection server daemon (10.0.0.1:36026).
Jan 20 01:33:12.049369 sshd[6427]: Accepted publickey for core from 10.0.0.1 port 36026 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:12.086443 sshd-session[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:12.207176 systemd-logind[1554]: New session 42 of user core.
Jan 20 01:33:12.312247 systemd[1]: Started session-42.scope - Session 42 of User core.
Jan 20 01:33:15.479960 sshd[6449]: Connection closed by 10.0.0.1 port 36026
Jan 20 01:33:15.485239 sshd-session[6427]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:15.588411 systemd[1]: sshd@42-10.0.0.13:22-10.0.0.1:36026.service: Deactivated successfully.
Jan 20 01:33:15.661398 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 01:33:15.733299 systemd-logind[1554]: Session 42 logged out. Waiting for processes to exit.
Jan 20 01:33:15.786978 systemd-logind[1554]: Removed session 42.
Jan 20 01:33:20.683306 systemd[1]: Started sshd@43-10.0.0.13:22-10.0.0.1:34918.service - OpenSSH per-connection server daemon (10.0.0.1:34918).
Jan 20 01:33:21.605075 sshd[6484]: Accepted publickey for core from 10.0.0.1 port 34918 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:21.615295 sshd-session[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:21.694706 systemd-logind[1554]: New session 43 of user core.
Jan 20 01:33:21.732716 systemd[1]: Started session-43.scope - Session 43 of User core.
Jan 20 01:33:23.429050 sshd[6487]: Connection closed by 10.0.0.1 port 34918
Jan 20 01:33:23.428342 sshd-session[6484]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:23.587600 systemd-logind[1554]: Session 43 logged out. Waiting for processes to exit.
Jan 20 01:33:23.599647 systemd[1]: sshd@43-10.0.0.13:22-10.0.0.1:34918.service: Deactivated successfully.
Jan 20 01:33:23.674048 systemd[1]: session-43.scope: Deactivated successfully.
Jan 20 01:33:23.758025 systemd-logind[1554]: Removed session 43.
Jan 20 01:33:28.618309 systemd[1]: Started sshd@44-10.0.0.13:22-10.0.0.1:49074.service - OpenSSH per-connection server daemon (10.0.0.1:49074).
Jan 20 01:33:29.883960 sshd[6529]: Accepted publickey for core from 10.0.0.1 port 49074 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:29.886218 sshd-session[6529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:30.154962 systemd-logind[1554]: New session 44 of user core.
Jan 20 01:33:30.240662 systemd[1]: Started session-44.scope - Session 44 of User core.
Jan 20 01:33:31.755216 kubelet[3024]: E0120 01:33:31.734199 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:32.694029 sshd[6547]: Connection closed by 10.0.0.1 port 49074
Jan 20 01:33:32.698907 sshd-session[6529]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:32.794484 systemd[1]: sshd@44-10.0.0.13:22-10.0.0.1:49074.service: Deactivated successfully.
Jan 20 01:33:32.822705 systemd[1]: session-44.scope: Deactivated successfully.
Jan 20 01:33:32.873670 systemd-logind[1554]: Session 44 logged out. Waiting for processes to exit.
Jan 20 01:33:32.899608 systemd-logind[1554]: Removed session 44.
Jan 20 01:33:36.840249 kubelet[3024]: E0120 01:33:36.822090 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:37.834062 systemd[1]: Started sshd@45-10.0.0.13:22-10.0.0.1:56978.service - OpenSSH per-connection server daemon (10.0.0.1:56978).
Jan 20 01:33:38.261956 sshd[6583]: Accepted publickey for core from 10.0.0.1 port 56978 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:38.275727 sshd-session[6583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:38.331199 systemd-logind[1554]: New session 45 of user core.
Jan 20 01:33:38.369382 systemd[1]: Started session-45.scope - Session 45 of User core.
Jan 20 01:33:39.236129 sshd[6586]: Connection closed by 10.0.0.1 port 56978
Jan 20 01:33:39.237540 sshd-session[6583]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:39.279162 systemd[1]: sshd@45-10.0.0.13:22-10.0.0.1:56978.service: Deactivated successfully.
Jan 20 01:33:39.300483 systemd[1]: session-45.scope: Deactivated successfully.
Jan 20 01:33:39.322620 systemd-logind[1554]: Session 45 logged out. Waiting for processes to exit.
Jan 20 01:33:39.327067 systemd-logind[1554]: Removed session 45.
Jan 20 01:33:44.471159 systemd[1]: Started sshd@46-10.0.0.13:22-10.0.0.1:56992.service - OpenSSH per-connection server daemon (10.0.0.1:56992).
Jan 20 01:33:44.765652 kubelet[3024]: E0120 01:33:44.762052 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:45.982072 sshd[6628]: Accepted publickey for core from 10.0.0.1 port 56992 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:46.015092 sshd-session[6628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:46.217966 systemd-logind[1554]: New session 46 of user core.
Jan 20 01:33:46.292633 systemd[1]: Started session-46.scope - Session 46 of User core.
Jan 20 01:33:46.765743 kubelet[3024]: E0120 01:33:46.758595 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:48.094488 sshd[6631]: Connection closed by 10.0.0.1 port 56992
Jan 20 01:33:48.104466 sshd-session[6628]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:48.128656 systemd[1]: sshd@46-10.0.0.13:22-10.0.0.1:56992.service: Deactivated successfully.
Jan 20 01:33:48.163612 systemd[1]: session-46.scope: Deactivated successfully.
Jan 20 01:33:48.178930 systemd-logind[1554]: Session 46 logged out. Waiting for processes to exit.
Jan 20 01:33:48.207916 systemd-logind[1554]: Removed session 46.
Jan 20 01:33:52.802407 kubelet[3024]: E0120 01:33:52.777919 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:33:53.301538 systemd[1]: Started sshd@47-10.0.0.13:22-10.0.0.1:33858.service - OpenSSH per-connection server daemon (10.0.0.1:33858).
Jan 20 01:33:54.525384 sshd[6672]: Accepted publickey for core from 10.0.0.1 port 33858 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:33:54.564031 sshd-session[6672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:33:54.696996 systemd-logind[1554]: New session 47 of user core.
Jan 20 01:33:54.709453 systemd[1]: Started session-47.scope - Session 47 of User core.
Jan 20 01:33:56.563931 sshd[6690]: Connection closed by 10.0.0.1 port 33858
Jan 20 01:33:56.581267 sshd-session[6672]: pam_unix(sshd:session): session closed for user core
Jan 20 01:33:56.615081 systemd-logind[1554]: Session 47 logged out. Waiting for processes to exit.
Jan 20 01:33:56.628553 systemd[1]: sshd@47-10.0.0.13:22-10.0.0.1:33858.service: Deactivated successfully.
Jan 20 01:33:56.699043 systemd[1]: session-47.scope: Deactivated successfully.
Jan 20 01:33:56.722053 systemd-logind[1554]: Removed session 47.
Jan 20 01:34:01.752081 systemd[1]: Started sshd@48-10.0.0.13:22-10.0.0.1:58788.service - OpenSSH per-connection server daemon (10.0.0.1:58788).
Jan 20 01:34:02.656194 sshd[6733]: Accepted publickey for core from 10.0.0.1 port 58788 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:02.706192 sshd-session[6733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:02.889970 systemd-logind[1554]: New session 48 of user core.
Jan 20 01:34:02.952276 systemd[1]: Started session-48.scope - Session 48 of User core.
Jan 20 01:34:04.724517 sshd[6736]: Connection closed by 10.0.0.1 port 58788
Jan 20 01:34:04.755656 sshd-session[6733]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:04.834413 systemd-logind[1554]: Session 48 logged out. Waiting for processes to exit.
Jan 20 01:34:04.865042 systemd[1]: sshd@48-10.0.0.13:22-10.0.0.1:58788.service: Deactivated successfully.
Jan 20 01:34:04.925487 systemd[1]: session-48.scope: Deactivated successfully.
Jan 20 01:34:04.994907 systemd-logind[1554]: Removed session 48.
Jan 20 01:34:05.756561 kubelet[3024]: E0120 01:34:05.750464 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:34:09.883452 systemd[1]: Started sshd@49-10.0.0.13:22-10.0.0.1:59954.service - OpenSSH per-connection server daemon (10.0.0.1:59954).
Jan 20 01:34:11.191938 sshd[6770]: Accepted publickey for core from 10.0.0.1 port 59954 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:11.191383 sshd-session[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:11.366323 systemd-logind[1554]: New session 49 of user core.
Jan 20 01:34:11.428592 systemd[1]: Started session-49.scope - Session 49 of User core.
Jan 20 01:34:12.819960 sshd[6787]: Connection closed by 10.0.0.1 port 59954
Jan 20 01:34:12.818613 sshd-session[6770]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:12.852479 systemd[1]: sshd@49-10.0.0.13:22-10.0.0.1:59954.service: Deactivated successfully.
Jan 20 01:34:12.857934 systemd[1]: session-49.scope: Deactivated successfully.
Jan 20 01:34:12.869438 systemd-logind[1554]: Session 49 logged out. Waiting for processes to exit.
Jan 20 01:34:12.884200 systemd-logind[1554]: Removed session 49.
Jan 20 01:34:18.587319 kubelet[3024]: E0120 01:34:18.540705 3024 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.809s"
Jan 20 01:34:18.703556 systemd[1]: Started sshd@50-10.0.0.13:22-10.0.0.1:52554.service - OpenSSH per-connection server daemon (10.0.0.1:52554).
Jan 20 01:34:19.476262 sshd[6814]: Accepted publickey for core from 10.0.0.1 port 52554 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:19.478872 sshd-session[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:19.571313 systemd-logind[1554]: New session 50 of user core.
Jan 20 01:34:19.622643 systemd[1]: Started session-50.scope - Session 50 of User core.
Jan 20 01:34:20.981107 sshd[6831]: Connection closed by 10.0.0.1 port 52554
Jan 20 01:34:20.989251 sshd-session[6814]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:21.056914 systemd[1]: sshd@50-10.0.0.13:22-10.0.0.1:52554.service: Deactivated successfully.
Jan 20 01:34:21.094411 systemd[1]: session-50.scope: Deactivated successfully.
Jan 20 01:34:21.129261 systemd-logind[1554]: Session 50 logged out. Waiting for processes to exit.
Jan 20 01:34:21.163084 systemd-logind[1554]: Removed session 50.
Jan 20 01:34:25.746460 kubelet[3024]: E0120 01:34:25.743399 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:34:26.188884 systemd[1]: Started sshd@51-10.0.0.13:22-10.0.0.1:38140.service - OpenSSH per-connection server daemon (10.0.0.1:38140).
Jan 20 01:34:27.124579 sshd[6867]: Accepted publickey for core from 10.0.0.1 port 38140 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:27.179417 sshd-session[6867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:27.320030 systemd-logind[1554]: New session 51 of user core.
Jan 20 01:34:27.356013 systemd[1]: Started session-51.scope - Session 51 of User core.
Jan 20 01:34:28.909252 sshd[6876]: Connection closed by 10.0.0.1 port 38140
Jan 20 01:34:28.911495 sshd-session[6867]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:28.965215 systemd-logind[1554]: Session 51 logged out. Waiting for processes to exit.
Jan 20 01:34:28.966652 systemd[1]: sshd@51-10.0.0.13:22-10.0.0.1:38140.service: Deactivated successfully.
Jan 20 01:34:28.984172 systemd[1]: session-51.scope: Deactivated successfully.
Jan 20 01:34:29.012524 systemd-logind[1554]: Removed session 51.
Jan 20 01:34:34.294236 systemd[1]: Started sshd@52-10.0.0.13:22-10.0.0.1:38154.service - OpenSSH per-connection server daemon (10.0.0.1:38154).
Jan 20 01:34:35.120973 sshd[6911]: Accepted publickey for core from 10.0.0.1 port 38154 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 01:34:35.177261 sshd-session[6911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:34:35.370097 systemd-logind[1554]: New session 52 of user core.
Jan 20 01:34:35.395354 systemd[1]: Started session-52.scope - Session 52 of User core.
Jan 20 01:34:37.274680 sshd[6918]: Connection closed by 10.0.0.1 port 38154
Jan 20 01:34:37.282612 sshd-session[6911]: pam_unix(sshd:session): session closed for user core
Jan 20 01:34:37.326688 systemd[1]: sshd@52-10.0.0.13:22-10.0.0.1:38154.service: Deactivated successfully.
Jan 20 01:34:37.357636 systemd[1]: session-52.scope: Deactivated successfully.
Jan 20 01:34:37.439037 systemd-logind[1554]: Session 52 logged out. Waiting for processes to exit.
Jan 20 01:34:37.502427 systemd-logind[1554]: Removed session 52.
Jan 20 01:34:37.736359 kubelet[3024]: E0120 01:34:37.733548 3024 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"