Jan 20 02:23:02.460178 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026
Jan 20 02:23:02.460214 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 02:23:02.460269 kernel: BIOS-provided physical RAM map:
Jan 20 02:23:02.460281 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 02:23:02.460289 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 02:23:02.460297 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 02:23:02.460306 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 02:23:02.460315 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 02:23:02.460354 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 02:23:02.460367 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 02:23:02.460375 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 02:23:02.460388 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 02:23:02.460397 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 02:23:02.460405 kernel: NX (Execute Disable) protection: active
Jan 20 02:23:02.460415 kernel: APIC: Static calls initialized
Jan 20 02:23:02.460424 kernel: SMBIOS 2.8 present.
Jan 20 02:23:02.460467 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 02:23:02.460476 kernel: DMI: Memory slots populated: 1/1
Jan 20 02:23:02.460485 kernel: Hypervisor detected: KVM
Jan 20 02:23:02.460494 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:23:02.460505 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 02:23:02.460517 kernel: kvm-clock: using sched offset of 36636440352 cycles
Jan 20 02:23:02.460526 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 02:23:02.460535 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 02:23:02.460544 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 02:23:02.460554 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 02:23:02.460568 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:23:02.460578 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 02:23:02.460588 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 02:23:02.460598 kernel: Using GB pages for direct mapping
Jan 20 02:23:02.460607 kernel: ACPI: Early table checksum verification disabled
Jan 20 02:23:02.460618 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 02:23:02.460628 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:02.460638 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:02.460648 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:02.460662 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 02:23:02.460671 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:02.460680 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:02.460690 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:02.460699 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:23:02.460713 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 02:23:02.460725 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 02:23:02.460736 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 02:23:02.460747 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 02:23:02.460758 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 02:23:02.460769 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 02:23:02.460779 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 02:23:02.460790 kernel: No NUMA configuration found
Jan 20 02:23:02.460801 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 02:23:02.460815 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 20 02:23:02.460825 kernel: Zone ranges:
Jan 20 02:23:02.460836 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 02:23:02.460847 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 02:23:02.460858 kernel: Normal empty
Jan 20 02:23:02.460869 kernel: Device empty
Jan 20 02:23:02.460879 kernel: Movable zone start for each node
Jan 20 02:23:02.460890 kernel: Early memory node ranges
Jan 20 02:23:02.460901 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 02:23:02.460915 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 02:23:02.460925 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 02:23:02.460937 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 02:23:02.460947 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 02:23:02.460986 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 02:23:02.460998 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 02:23:02.461008 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 02:23:02.461019 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 02:23:02.461030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 02:23:02.462484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 02:23:02.462498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 02:23:02.462512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 02:23:02.462522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 02:23:02.462531 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 02:23:02.462540 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 02:23:02.462550 kernel: TSC deadline timer available
Jan 20 02:23:02.462559 kernel: CPU topo: Max. logical packages: 1
Jan 20 02:23:02.462568 kernel: CPU topo: Max. logical dies: 1
Jan 20 02:23:02.462587 kernel: CPU topo: Max. dies per package: 1
Jan 20 02:23:02.462598 kernel: CPU topo: Max. threads per core: 1
Jan 20 02:23:02.462608 kernel: CPU topo: Num. cores per package: 4
Jan 20 02:23:02.462617 kernel: CPU topo: Num. threads per package: 4
Jan 20 02:23:02.462626 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 02:23:02.462635 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 02:23:02.462644 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 02:23:02.462654 kernel: kvm-guest: setup PV sched yield
Jan 20 02:23:02.462667 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 02:23:02.462681 kernel: Booting paravirtualized kernel on KVM
Jan 20 02:23:02.462691 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 02:23:02.462700 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 02:23:02.462710 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 02:23:02.462720 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 02:23:02.462731 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 02:23:02.462743 kernel: kvm-guest: PV spinlocks enabled
Jan 20 02:23:02.462753 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 02:23:02.462764 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 02:23:02.462779 kernel: random: crng init done
Jan 20 02:23:02.462788 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 02:23:02.462797 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 02:23:02.462837 kernel: Fallback order for Node 0: 0
Jan 20 02:23:02.462847 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 20 02:23:02.462857 kernel: Policy zone: DMA32
Jan 20 02:23:02.462866 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 02:23:02.462875 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 02:23:02.462888 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 02:23:02.462903 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 02:23:02.462913 kernel: Dynamic Preempt: voluntary
Jan 20 02:23:02.462922 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 02:23:02.462933 kernel: rcu: RCU event tracing is enabled.
Jan 20 02:23:02.462942 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 02:23:02.462952 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 02:23:02.462997 kernel: Rude variant of Tasks RCU enabled.
Jan 20 02:23:02.463008 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 02:23:02.463018 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 02:23:02.463032 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 02:23:02.463178 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:23:02.463190 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:23:02.463200 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:23:02.463209 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 02:23:02.463219 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 02:23:02.463280 kernel: Console: colour VGA+ 80x25
Jan 20 02:23:02.463291 kernel: printk: legacy console [ttyS0] enabled
Jan 20 02:23:02.463301 kernel: ACPI: Core revision 20240827
Jan 20 02:23:02.463311 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 02:23:02.463321 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 02:23:02.463339 kernel: x2apic enabled
Jan 20 02:23:02.463349 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 02:23:02.463389 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 02:23:02.463403 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 02:23:02.463414 kernel: kvm-guest: setup PV IPIs
Jan 20 02:23:02.463429 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 02:23:02.463439 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:23:02.463449 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 02:23:02.463460 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 02:23:02.463473 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 02:23:02.463485 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 02:23:02.463495 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 02:23:02.463505 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 02:23:02.463515 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 02:23:02.463529 kernel: Speculative Store Bypass: Vulnerable
Jan 20 02:23:02.463541 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 02:23:02.463553 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 02:23:02.463565 kernel: active return thunk: srso_alias_return_thunk
Jan 20 02:23:02.463576 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 02:23:02.463587 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 02:23:02.463599 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 02:23:02.463610 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 02:23:02.463625 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 02:23:02.463636 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 02:23:02.463647 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 02:23:02.463658 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 02:23:02.463669 kernel: Freeing SMP alternatives memory: 32K
Jan 20 02:23:02.463679 kernel: pid_max: default: 32768 minimum: 301
Jan 20 02:23:02.463691 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 02:23:02.463702 kernel: landlock: Up and running.
Jan 20 02:23:02.463713 kernel: SELinux: Initializing.
Jan 20 02:23:02.463728 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:23:02.463739 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:23:02.463779 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 02:23:02.463791 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 02:23:02.463802 kernel: signal: max sigframe size: 1776
Jan 20 02:23:02.463813 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 02:23:02.463825 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 02:23:02.463837 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 02:23:02.463848 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 02:23:02.463863 kernel: smp: Bringing up secondary CPUs ...
Jan 20 02:23:02.463874 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 02:23:02.463886 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 02:23:02.463897 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 02:23:02.463908 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 02:23:02.463920 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145096K reserved, 0K cma-reserved)
Jan 20 02:23:02.463932 kernel: devtmpfs: initialized
Jan 20 02:23:02.463943 kernel: x86/mm: Memory block size: 128MB
Jan 20 02:23:02.463954 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 02:23:02.463969 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 02:23:02.463980 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 02:23:02.463992 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 02:23:02.464003 kernel: audit: initializing netlink subsys (disabled)
Jan 20 02:23:02.464014 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 02:23:02.464026 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 02:23:02.467319 kernel: audit: type=2000 audit(1768875764.930:1): state=initialized audit_enabled=0 res=1
Jan 20 02:23:02.467341 kernel: cpuidle: using governor menu
Jan 20 02:23:02.467352 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 02:23:02.467369 kernel: dca service started, version 1.12.1
Jan 20 02:23:02.467383 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 20 02:23:02.467393 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 02:23:02.467403 kernel: PCI: Using configuration type 1 for base access
Jan 20 02:23:02.467413 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 02:23:02.467424 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 02:23:02.467435 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 02:23:02.467447 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 02:23:02.467459 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 02:23:02.467476 kernel: ACPI: Added _OSI(Module Device)
Jan 20 02:23:02.467485 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 02:23:02.467495 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 02:23:02.467505 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 02:23:02.467515 kernel: ACPI: Interpreter enabled
Jan 20 02:23:02.467525 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 02:23:02.467538 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 02:23:02.467550 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 02:23:02.467561 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 02:23:02.467575 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 02:23:02.467586 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 02:23:02.468030 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 02:23:02.468337 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 02:23:02.468513 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 02:23:02.468531 kernel: PCI host bridge to bus 0000:00
Jan 20 02:23:02.468814 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 02:23:02.468982 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 02:23:02.469195 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 02:23:02.472563 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 02:23:02.472742 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 02:23:02.472943 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 02:23:02.473428 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 02:23:02.473778 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 02:23:02.474102 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 02:23:02.474322 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 20 02:23:02.474817 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 20 02:23:02.474996 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 20 02:23:02.475312 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 02:23:02.475490 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 16601 usecs
Jan 20 02:23:02.475682 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 02:23:02.475853 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 20 02:23:02.476020 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 20 02:23:02.476299 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 02:23:02.476582 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 02:23:02.476766 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 20 02:23:02.476949 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 20 02:23:02.477213 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 02:23:02.485586 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 02:23:02.485781 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 20 02:23:02.485962 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 20 02:23:02.486223 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 02:23:02.489563 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 20 02:23:02.489847 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 02:23:02.490032 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 02:23:02.490327 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 10742 usecs
Jan 20 02:23:02.490605 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 02:23:02.490793 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 20 02:23:02.490966 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 20 02:23:02.496020 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 02:23:02.496309 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 20 02:23:02.496327 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 02:23:02.496342 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 02:23:02.496356 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 02:23:02.496366 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 02:23:02.496376 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 02:23:02.496386 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 02:23:02.496397 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 02:23:02.496407 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 02:23:02.496426 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 02:23:02.496439 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 02:23:02.496449 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 02:23:02.496459 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 02:23:02.496469 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 02:23:02.496479 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 02:23:02.496490 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 02:23:02.496501 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 02:23:02.496513 kernel: iommu: Default domain type: Translated
Jan 20 02:23:02.496531 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 02:23:02.496543 kernel: PCI: Using ACPI for IRQ routing
Jan 20 02:23:02.496553 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 02:23:02.496564 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 02:23:02.496574 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 02:23:02.496757 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 02:23:02.496929 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 02:23:02.497160 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 02:23:02.497177 kernel: vgaarb: loaded
Jan 20 02:23:02.497197 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 02:23:02.497208 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 02:23:02.497218 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 02:23:02.497228 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 02:23:02.500317 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 02:23:02.500335 kernel: pnp: PnP ACPI init
Jan 20 02:23:02.500669 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 02:23:02.500689 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 02:23:02.500706 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 02:23:02.500716 kernel: NET: Registered PF_INET protocol family
Jan 20 02:23:02.500728 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 02:23:02.500769 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 02:23:02.500780 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 02:23:02.500790 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 02:23:02.500800 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 02:23:02.500811 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 02:23:02.500822 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:23:02.500840 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:23:02.500850 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 02:23:02.500860 kernel: NET: Registered PF_XDP protocol family
Jan 20 02:23:02.501095 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 02:23:02.501309 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 02:23:02.501476 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 02:23:02.501637 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 02:23:02.501798 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 02:23:02.501960 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 02:23:02.501977 kernel: PCI: CLS 0 bytes, default 64
Jan 20 02:23:02.501988 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:23:02.501999 kernel: Initialise system trusted keyrings
Jan 20 02:23:02.502009 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 02:23:02.502019 kernel: Key type asymmetric registered
Jan 20 02:23:02.502030 kernel: Asymmetric key parser 'x509' registered
Jan 20 02:23:02.502103 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 02:23:02.502115 kernel: io scheduler mq-deadline registered
Jan 20 02:23:02.502131 kernel: io scheduler kyber registered
Jan 20 02:23:02.502142 kernel: io scheduler bfq registered
Jan 20 02:23:02.502153 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 02:23:02.502165 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 02:23:02.502176 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 02:23:02.502188 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 02:23:02.502198 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 02:23:02.502209 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 02:23:02.502220 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 02:23:02.504350 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 02:23:02.504366 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 02:23:02.504617 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 02:23:02.504788 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 02:23:02.504804 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 20 02:23:02.504962 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T02:22:59 UTC (1768875779)
Jan 20 02:23:02.505188 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 02:23:02.505205 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 02:23:02.505224 kernel: NET: Registered PF_INET6 protocol family
Jan 20 02:23:02.508647 kernel: Segment Routing with IPv6
Jan 20 02:23:02.508666 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 02:23:02.508677 kernel: NET: Registered PF_PACKET protocol family
Jan 20 02:23:02.508687 kernel: Key type dns_resolver registered
Jan 20 02:23:02.508697 kernel: IPI shorthand broadcast: enabled
Jan 20 02:23:02.508708 kernel: sched_clock: Marking stable (12216043616, 1255743835)->(16140773138, -2668985687)
Jan 20 02:23:02.508720 kernel: registered taskstats version 1
Jan 20 02:23:02.508732 kernel: Loading compiled-in X.509 certificates
Jan 20 02:23:02.508779 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9'
Jan 20 02:23:02.508789 kernel: Demotion targets for Node 0: null
Jan 20 02:23:02.508798 kernel: Key type .fscrypt registered
Jan 20 02:23:02.508808 kernel: Key type fscrypt-provisioning registered
Jan 20 02:23:02.508819 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 02:23:02.508832 kernel: ima: Allocated hash algorithm: sha1
Jan 20 02:23:02.508843 kernel: ima: No architecture policies found
Jan 20 02:23:02.508853 kernel: clk: Disabling unused clocks
Jan 20 02:23:02.508863 kernel: Warning: unable to open an initial console.
Jan 20 02:23:02.508878 kernel: Freeing unused kernel image (initmem) memory: 46204K
Jan 20 02:23:02.508888 kernel: Write protecting the kernel read-only data: 40960k
Jan 20 02:23:02.508899 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 20 02:23:02.508911 kernel: Run /init as init process
Jan 20 02:23:02.508923 kernel: with arguments:
Jan 20 02:23:02.508935 kernel: /init
Jan 20 02:23:02.508946 kernel: with environment:
Jan 20 02:23:02.508956 kernel: HOME=/
Jan 20 02:23:02.508967 kernel: TERM=linux
Jan 20 02:23:02.508984 systemd[1]: Successfully made /usr/ read-only.
Jan 20 02:23:02.509001 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 02:23:02.509014 systemd[1]: Detected virtualization kvm.
Jan 20 02:23:02.509026 systemd[1]: Detected architecture x86-64.
Jan 20 02:23:02.509096 systemd[1]: Running in initrd.
Jan 20 02:23:02.509111 systemd[1]: No hostname configured, using default hostname.
Jan 20 02:23:02.509122 systemd[1]: Hostname set to .
Jan 20 02:23:02.509138 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 02:23:02.509165 kernel: hrtimer: interrupt took 9988623 ns
Jan 20 02:23:02.509182 systemd[1]: Queued start job for default target initrd.target.
Jan 20 02:23:02.509194 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:23:02.509205 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:23:02.509217 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 02:23:02.509272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 02:23:02.509287 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 02:23:02.509302 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 02:23:02.509316 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 02:23:02.509329 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 02:23:02.509350 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:23:02.509361 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:23:02.509376 systemd[1]: Reached target paths.target - Path Units.
Jan 20 02:23:02.509387 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 02:23:02.509397 systemd[1]: Reached target swap.target - Swaps.
Jan 20 02:23:02.509408 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 02:23:02.509423 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 02:23:02.509435 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 02:23:02.509446 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 02:23:02.509457 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 02:23:02.509472 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 02:23:02.509484 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 02:23:02.509498 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:23:02.509510 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 02:23:02.509523 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 02:23:02.509537 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 02:23:02.509548 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 02:23:02.509559 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 02:23:02.509570 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 02:23:02.509586 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 02:23:02.509597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 02:23:02.509612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:23:02.509672 systemd-journald[203]: Collecting audit messages is disabled. Jan 20 02:23:02.509710 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jan 20 02:23:02.509723 systemd-journald[203]: Journal started Jan 20 02:23:02.509750 systemd-journald[203]: Runtime Journal (/run/log/journal/b8fe166bf7984101b86321d7f00f3243) is 6M, max 48.3M, 42.2M free. Jan 20 02:23:02.530381 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 02:23:02.547614 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:23:02.563847 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 02:23:02.613272 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 02:23:02.623296 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 02:23:02.793111 systemd-tmpfiles[214]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 02:23:03.503995 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 02:23:03.504106 kernel: Bridge firewalling registered Jan 20 02:23:02.793360 systemd-modules-load[205]: Inserted module 'overlay' Jan 20 02:23:02.879112 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 02:23:03.444771 systemd-modules-load[205]: Inserted module 'br_netfilter' Jan 20 02:23:03.486945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 02:23:03.522225 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:23:03.599367 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 02:23:03.655017 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 02:23:03.810111 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:23:03.810693 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 20 02:23:03.890562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 02:23:03.943013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 02:23:04.008561 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 02:23:04.036515 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 02:23:04.148910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 02:23:04.217700 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea Jan 20 02:23:04.476499 systemd-resolved[240]: Positive Trust Anchors: Jan 20 02:23:04.476542 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 02:23:04.476585 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 02:23:04.493460 systemd-resolved[240]: Defaulting to hostname 'linux'. Jan 20 02:23:04.498604 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 20 02:23:04.551432 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 02:23:05.140377 kernel: SCSI subsystem initialized Jan 20 02:23:05.226111 kernel: Loading iSCSI transport class v2.0-870. Jan 20 02:23:05.364081 kernel: iscsi: registered transport (tcp) Jan 20 02:23:05.543984 kernel: iscsi: registered transport (qla4xxx) Jan 20 02:23:05.544127 kernel: QLogic iSCSI HBA Driver Jan 20 02:23:05.743656 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 02:23:05.861728 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:23:05.895657 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 02:23:06.425722 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 02:23:06.448593 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 02:23:06.743236 kernel: raid6: avx2x4 gen() 6159 MB/s Jan 20 02:23:06.771690 kernel: raid6: avx2x2 gen() 7760 MB/s Jan 20 02:23:06.792482 kernel: raid6: avx2x1 gen() 3188 MB/s Jan 20 02:23:06.792566 kernel: raid6: using algorithm avx2x2 gen() 7760 MB/s Jan 20 02:23:06.828175 kernel: raid6: .... xor() 12292 MB/s, rmw enabled Jan 20 02:23:06.828258 kernel: raid6: using avx2x2 recovery algorithm Jan 20 02:23:06.899778 kernel: xor: automatically using best checksumming function avx Jan 20 02:23:08.125826 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 02:23:08.195529 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 02:23:08.240236 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 02:23:08.417130 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jan 20 02:23:08.464491 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 20 02:23:08.521355 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 02:23:08.738529 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Jan 20 02:23:08.924797 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 02:23:08.959499 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 02:23:09.465557 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 02:23:09.511276 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 02:23:09.934674 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 02:23:09.946887 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 02:23:09.950744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:23:10.019402 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:23:10.064642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:23:10.079267 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 02:23:10.422445 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 02:23:10.463907 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 20 02:23:10.474122 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 02:23:10.474211 kernel: GPT:9289727 != 19775487 Jan 20 02:23:10.474434 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 02:23:10.474454 kernel: GPT:9289727 != 19775487 Jan 20 02:23:10.474470 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 02:23:10.474484 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 02:23:11.088175 kernel: libata version 3.00 loaded. Jan 20 02:23:11.305411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 02:23:11.654471 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 02:23:11.755218 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 02:23:11.803191 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 02:23:11.875526 kernel: AES CTR mode by8 optimization enabled Jan 20 02:23:11.875623 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 02:23:11.875941 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 02:23:11.877336 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 02:23:12.189811 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 02:23:12.193609 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 02:23:12.193813 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 02:23:12.194010 kernel: scsi host0: ahci Jan 20 02:23:12.197452 kernel: scsi host1: ahci Jan 20 02:23:12.197671 kernel: scsi host2: ahci Jan 20 02:23:12.197877 kernel: scsi host3: ahci Jan 20 02:23:12.198164 kernel: scsi host4: ahci Jan 20 02:23:12.198415 kernel: scsi host5: ahci Jan 20 02:23:12.198762 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Jan 20 02:23:12.198781 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Jan 20 02:23:12.198802 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Jan 20 02:23:12.198818 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Jan 20 02:23:12.198837 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Jan 20 02:23:12.198852 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Jan 20 02:23:11.972277 systemd[1]: Found 
device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 02:23:12.113265 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 02:23:12.277776 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 02:23:12.402503 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 02:23:12.422874 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 02:23:12.433607 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 02:23:12.435748 disk-uuid[612]: Primary Header is updated. Jan 20 02:23:12.435748 disk-uuid[612]: Secondary Entries is updated. Jan 20 02:23:12.435748 disk-uuid[612]: Secondary Header is updated. Jan 20 02:23:12.598450 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 02:23:12.598485 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 02:23:12.598505 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 02:23:12.598521 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 02:23:12.598534 kernel: ata3.00: applying bridge limits Jan 20 02:23:12.598547 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 02:23:12.598561 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 02:23:12.598586 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 02:23:12.598602 kernel: ata3.00: configured for UDMA/100 Jan 20 02:23:12.598616 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 02:23:12.980789 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 02:23:12.981231 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 02:23:13.049127 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 02:23:13.602436 disk-uuid[613]: The operation has completed successfully. Jan 20 02:23:13.631119 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 02:23:14.057598 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jan 20 02:23:14.079848 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 02:23:14.080113 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 02:23:14.187138 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 02:23:14.203434 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 02:23:14.225398 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 02:23:14.262597 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 02:23:14.365590 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 02:23:14.438537 sh[649]: Success Jan 20 02:23:14.423107 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 02:23:14.557015 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 02:23:14.557142 kernel: device-mapper: uevent: version 1.0.3 Jan 20 02:23:14.563851 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 02:23:14.744910 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 02:23:14.997575 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 02:23:15.039527 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 02:23:15.124464 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 20 02:23:15.198915 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (663) Jan 20 02:23:15.234479 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340 Jan 20 02:23:15.234547 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:23:15.325253 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 02:23:15.325426 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 02:23:15.357468 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 02:23:15.372706 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 02:23:15.382409 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 02:23:15.397134 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 02:23:15.429698 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 02:23:15.740512 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (714) Jan 20 02:23:15.761366 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 02:23:15.761421 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:23:15.820985 kernel: BTRFS info (device vda6): turning on async discard Jan 20 02:23:15.821115 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 02:23:15.858602 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 02:23:15.888391 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 02:23:15.914297 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 20 02:23:16.253581 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 02:23:16.307550 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 02:23:16.387836 ignition[772]: Ignition 2.22.0 Jan 20 02:23:16.389953 ignition[772]: Stage: fetch-offline Jan 20 02:23:16.390014 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:23:16.390028 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:23:16.390208 ignition[772]: parsed url from cmdline: "" Jan 20 02:23:16.390215 ignition[772]: no config URL provided Jan 20 02:23:16.390223 ignition[772]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 02:23:16.390235 ignition[772]: no config at "/usr/lib/ignition/user.ign" Jan 20 02:23:16.390264 ignition[772]: op(1): [started] loading QEMU firmware config module Jan 20 02:23:16.390272 ignition[772]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 02:23:16.457382 ignition[772]: op(1): [finished] loading QEMU firmware config module Jan 20 02:23:16.745688 systemd-networkd[836]: lo: Link UP Jan 20 02:23:16.747323 systemd-networkd[836]: lo: Gained carrier Jan 20 02:23:16.800892 systemd-networkd[836]: Enumeration completed Jan 20 02:23:16.782772 ignition[772]: parsing config with SHA512: 01a4d635add4150b5888a49013021be56ad5de4646cd01b1fbcbcab7e451b64be76cac902f7d5565766147b8dfb12c87616c5143e4ccccc5a756b6f238b078a1 Jan 20 02:23:16.805229 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 02:23:16.821632 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 02:23:16.821639 systemd-networkd[836]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 02:23:16.889839 systemd-networkd[836]: eth0: Link UP Jan 20 02:23:16.914208 systemd[1]: Reached target network.target - Network. 
Jan 20 02:23:16.914564 systemd-networkd[836]: eth0: Gained carrier Jan 20 02:23:16.914589 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 02:23:16.976578 unknown[772]: fetched base config from "system" Jan 20 02:23:16.977734 ignition[772]: fetch-offline: fetch-offline passed Jan 20 02:23:16.976631 unknown[772]: fetched user config from "qemu" Jan 20 02:23:16.977865 ignition[772]: Ignition finished successfully Jan 20 02:23:17.114482 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 02:23:17.151836 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 02:23:17.203286 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 02:23:17.350601 systemd-networkd[836]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 02:23:17.889405 ignition[844]: Ignition 2.22.0 Jan 20 02:23:17.889419 ignition[844]: Stage: kargs Jan 20 02:23:17.889618 ignition[844]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:23:17.889631 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:23:18.000541 ignition[844]: kargs: kargs passed Jan 20 02:23:18.000617 ignition[844]: Ignition finished successfully Jan 20 02:23:18.041108 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 02:23:18.098658 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 20 02:23:18.593774 systemd-networkd[836]: eth0: Gained IPv6LL Jan 20 02:23:21.244417 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1879408194 wd_nsec: 1879407308 Jan 20 02:23:21.399881 ignition[853]: Ignition 2.22.0 Jan 20 02:23:21.400625 ignition[853]: Stage: disks Jan 20 02:23:21.437435 ignition[853]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:23:21.437536 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:23:21.515777 ignition[853]: disks: disks passed Jan 20 02:23:21.516228 ignition[853]: Ignition finished successfully Jan 20 02:23:21.630865 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 02:23:21.700366 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 02:23:21.749961 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 02:23:21.805551 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 02:23:21.870927 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 02:23:21.914167 systemd[1]: Reached target basic.target - Basic System. Jan 20 02:23:21.937860 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 02:23:22.276749 systemd-fsck[862]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 20 02:23:22.324907 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 02:23:22.420996 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 02:23:23.529708 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none. Jan 20 02:23:23.600991 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 02:23:23.781291 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 02:23:23.882558 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 20 02:23:23.920642 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 02:23:23.994281 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 02:23:23.994370 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 02:23:23.994463 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 02:23:24.179200 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (871) Jan 20 02:23:24.078035 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 02:23:24.263569 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 02:23:24.263644 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:23:24.159871 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 02:23:24.330428 kernel: BTRFS info (device vda6): turning on async discard Jan 20 02:23:24.330517 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 02:23:24.347705 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 02:23:24.753270 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 02:23:24.804664 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory Jan 20 02:23:24.893316 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 02:23:24.945139 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 02:23:26.199602 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 02:23:26.247713 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 02:23:26.272138 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 20 02:23:26.415809 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 02:23:26.465528 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 02:23:26.837676 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 02:23:27.046734 ignition[984]: INFO : Ignition 2.22.0 Jan 20 02:23:27.112956 ignition[984]: INFO : Stage: mount Jan 20 02:23:27.133648 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 02:23:27.133648 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:23:27.198145 ignition[984]: INFO : mount: mount passed Jan 20 02:23:27.198145 ignition[984]: INFO : Ignition finished successfully Jan 20 02:23:27.228241 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 02:23:27.288628 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 02:23:27.462760 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 02:23:27.892615 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997) Jan 20 02:23:27.918177 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 02:23:27.918265 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:23:28.018345 kernel: BTRFS info (device vda6): turning on async discard Jan 20 02:23:28.018538 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 02:23:28.078023 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 02:23:28.294085 ignition[1014]: INFO : Ignition 2.22.0 Jan 20 02:23:28.294085 ignition[1014]: INFO : Stage: files Jan 20 02:23:28.294085 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 02:23:28.294085 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:23:28.294085 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Jan 20 02:23:28.380308 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 02:23:28.380308 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 02:23:28.424125 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 02:23:28.452219 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 02:23:28.452219 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 02:23:28.433356 unknown[1014]: wrote ssh authorized keys file for user: core Jan 20 02:23:28.548177 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 02:23:28.548177 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 02:23:28.931847 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 02:23:33.128707 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 02:23:33.128707 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 02:23:33.275811 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 20 02:23:33.793805 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 20 02:23:35.956401 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 02:23:35.956401 ignition[1014]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 20 02:23:35.993913 ignition[1014]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:23:36.046623 ignition[1014]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:23:36.046623 ignition[1014]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 20 02:23:36.046623 ignition[1014]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 20 02:23:36.046623 ignition[1014]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:23:36.046623 ignition[1014]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:23:36.046623 ignition[1014]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 20 02:23:36.046623 ignition[1014]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:23:36.316517 ignition[1014]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:23:36.373429 ignition[1014]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:23:36.394741 ignition[1014]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:23:36.394741 ignition[1014]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 02:23:36.394741 ignition[1014]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 02:23:36.394741 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:23:36.394741 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:23:36.394741 ignition[1014]: INFO : files: files passed
Jan 20 02:23:36.394741 ignition[1014]: INFO : Ignition finished successfully
Jan 20 02:23:36.489958 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 02:23:36.642787 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 02:23:36.701129 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 02:23:36.772631 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 02:23:36.778672 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 02:23:36.878582 initrd-setup-root-after-ignition[1044]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 02:23:36.930415 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:23:36.930415 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:23:36.993556 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:23:37.023322 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:23:37.116844 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 02:23:37.146357 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 02:23:37.436614 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 02:23:37.438351 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 02:23:37.499883 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 02:23:37.533254 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 02:23:37.551882 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 02:23:37.558281 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 02:23:37.794008 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:23:37.834797 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 02:23:38.027219 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:23:38.065694 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:23:38.099560 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 02:23:38.114393 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 02:23:38.114678 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:23:38.115148 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 02:23:38.115366 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 02:23:38.119921 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 02:23:38.149378 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 02:23:38.293902 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 02:23:38.342159 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 02:23:38.384903 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 02:23:38.429106 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 02:23:38.463990 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 02:23:38.476335 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 02:23:38.512865 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 02:23:38.525462 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 02:23:38.525711 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 02:23:38.526129 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:23:38.526273 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:23:38.526368 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 02:23:38.530134 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:23:38.530305 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 02:23:38.530509 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 02:23:38.530778 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 02:23:38.530914 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 02:23:38.531184 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 02:23:38.531279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 02:23:38.545285 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:23:38.681339 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 02:23:38.708347 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 02:23:38.980803 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 02:23:38.987278 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 02:23:39.023391 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 02:23:39.025857 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 02:23:39.047798 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 02:23:39.048090 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:23:39.100138 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 02:23:39.100328 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 02:23:39.145223 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 02:23:39.207736 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 02:23:39.236666 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 02:23:39.236993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:23:39.392812 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 02:23:39.393137 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 02:23:39.448211 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 02:23:39.448402 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 02:23:39.540119 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 02:23:39.552734 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 02:23:39.583416 ignition[1070]: INFO : Ignition 2.22.0
Jan 20 02:23:39.583416 ignition[1070]: INFO : Stage: umount
Jan 20 02:23:39.583416 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:23:39.583416 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:23:39.583416 ignition[1070]: INFO : umount: umount passed
Jan 20 02:23:39.583416 ignition[1070]: INFO : Ignition finished successfully
Jan 20 02:23:39.552918 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 02:23:39.594297 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 02:23:39.595606 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 02:23:39.700726 systemd[1]: Stopped target network.target - Network.
Jan 20 02:23:39.727035 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 02:23:39.727253 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 02:23:39.738333 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 02:23:39.738458 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 02:23:39.753261 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 02:23:39.753386 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 02:23:39.769952 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 02:23:39.770174 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 02:23:39.788273 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 02:23:39.788393 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 02:23:39.814379 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 02:23:39.919189 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 02:23:40.001201 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 02:23:40.001394 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 02:23:40.065791 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 20 02:23:40.070269 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 20 02:23:40.163873 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 02:23:40.164819 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:23:40.222727 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 02:23:40.238089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 02:23:40.238225 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 02:23:40.272870 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:23:40.387298 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 02:23:40.392616 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 02:23:40.451731 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 20 02:23:40.457595 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 02:23:40.462650 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:23:40.539883 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 02:23:40.540307 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 02:23:40.606674 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 02:23:40.609571 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 02:23:40.685270 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 02:23:40.685458 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 02:23:40.704452 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 02:23:40.704583 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 02:23:40.753693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 02:23:40.753828 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 02:23:40.832794 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 02:23:40.936749 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 20 02:23:40.937830 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 02:23:41.031848 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 02:23:41.033299 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:23:41.107867 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 02:23:41.109116 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:23:41.181905 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 02:23:41.183766 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:23:41.241348 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 20 02:23:41.245547 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 02:23:41.332013 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 02:23:41.336098 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 02:23:41.429873 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 02:23:41.439794 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:23:41.539919 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 02:23:41.540202 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:23:41.584646 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 20 02:23:41.584749 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 02:23:41.584822 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jan 20 02:23:41.584890 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 20 02:23:41.584962 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 02:23:41.585031 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 20 02:23:41.597808 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 02:23:41.598102 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 02:23:41.621372 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 02:23:41.621886 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 02:23:41.833343 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 02:23:41.897788 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 02:23:42.150143 systemd[1]: Switching root.
Jan 20 02:23:42.302992 systemd-journald[203]: Journal stopped
Jan 20 02:23:54.852091 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Jan 20 02:23:54.852237 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 02:23:54.852273 kernel: SELinux: policy capability open_perms=1
Jan 20 02:23:54.852299 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 02:23:54.855853 kernel: SELinux: policy capability always_check_network=0
Jan 20 02:23:54.855885 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 02:23:54.855900 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 02:23:54.855914 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 02:23:54.855929 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 02:23:54.855944 kernel: SELinux: policy capability userspace_initial_context=0
Jan 20 02:23:54.855961 kernel: audit: type=1403 audit(1768875823.476:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 02:23:54.855992 systemd[1]: Successfully loaded SELinux policy in 393.991ms.
Jan 20 02:23:54.856032 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 67.026ms.
Jan 20 02:23:54.865772 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 02:23:54.865830 systemd[1]: Detected virtualization kvm.
Jan 20 02:23:54.865851 systemd[1]: Detected architecture x86-64.
Jan 20 02:23:54.865868 systemd[1]: Detected first boot.
Jan 20 02:23:54.865883 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 02:23:54.865899 zram_generator::config[1116]: No configuration found.
Jan 20 02:23:54.865952 kernel: Guest personality initialized and is inactive
Jan 20 02:23:54.865981 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 20 02:23:54.865996 kernel: Initialized host personality
Jan 20 02:23:54.866011 kernel: NET: Registered PF_VSOCK protocol family
Jan 20 02:23:54.866108 systemd[1]: Populated /etc with preset unit settings.
Jan 20 02:23:54.866131 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 20 02:23:54.866149 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 02:23:54.866165 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 20 02:23:54.866217 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 02:23:54.866242 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 02:23:54.866270 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 02:23:54.866290 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 02:23:54.866570 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 02:23:54.866641 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 02:23:54.866667 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 20 02:23:54.866687 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 02:23:54.866708 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 02:23:54.866730 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:23:54.866752 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:23:54.866782 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 02:23:54.866804 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 02:23:54.866825 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 02:23:54.866843 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 02:23:54.866861 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 20 02:23:54.866879 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:23:54.866896 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:23:54.866919 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 20 02:23:54.866937 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 20 02:23:54.866954 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 20 02:23:54.866974 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 02:23:54.866992 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:23:54.867018 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 02:23:54.867094 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 02:23:54.867120 systemd[1]: Reached target swap.target - Swaps.
Jan 20 02:23:54.867138 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 02:23:54.867158 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 02:23:54.867180 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 20 02:23:54.867198 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:23:54.867216 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 02:23:54.867233 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 02:23:54.867253 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 02:23:54.867273 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 02:23:54.867293 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 02:23:54.870567 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 02:23:54.870638 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:23:54.870674 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 02:23:54.870697 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 02:23:54.870717 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 02:23:54.870740 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 02:23:54.870760 systemd[1]: Reached target machines.target - Containers.
Jan 20 02:23:54.870786 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 02:23:54.870807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:23:54.870825 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 02:23:54.870848 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 02:23:54.870866 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 02:23:54.870883 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 02:23:54.870900 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 02:23:54.870917 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 02:23:54.870934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 02:23:54.870952 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 02:23:54.870969 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 02:23:54.870990 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 20 02:23:54.871008 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 20 02:23:54.871096 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 20 02:23:54.871124 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:23:54.871143 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 02:23:54.871161 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 02:23:54.871178 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 02:23:54.871195 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 02:23:54.871213 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 20 02:23:54.871237 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 02:23:54.871257 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 20 02:23:54.871274 systemd[1]: Stopped verity-setup.service.
Jan 20 02:23:54.871292 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:23:54.874484 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 02:23:54.874526 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 02:23:54.874550 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 02:23:54.874572 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 02:23:54.874592 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 02:23:54.874647 kernel: ACPI: bus type drm_connector registered
Jan 20 02:23:54.874674 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 02:23:54.875255 systemd-journald[1202]: Collecting audit messages is disabled.
Jan 20 02:23:54.875820 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 02:23:54.875845 systemd-journald[1202]: Journal started
Jan 20 02:23:54.875876 systemd-journald[1202]: Runtime Journal (/run/log/journal/b8fe166bf7984101b86321d7f00f3243) is 6M, max 48.3M, 42.2M free.
Jan 20 02:23:49.305763 systemd[1]: Queued start job for default target multi-user.target.
Jan 20 02:23:49.388512 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 02:23:49.394325 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 02:23:49.426405 systemd[1]: systemd-journald.service: Consumed 2.443s CPU time.
Jan 20 02:23:54.951435 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 02:23:54.951535 kernel: loop: module loaded
Jan 20 02:23:54.972201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 02:23:54.987106 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 02:23:54.987526 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 02:23:55.005978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 02:23:55.006444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 02:23:55.021494 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 02:23:55.026172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 02:23:55.043902 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 02:23:55.059377 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 02:23:55.070449 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 02:23:55.084412 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 20 02:23:55.152271 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 02:23:55.175256 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 02:23:55.185440 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 02:23:55.185570 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 02:23:55.207120 kernel: fuse: init (API version 7.41)
Jan 20 02:23:55.208350 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 20 02:23:55.229977 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 02:23:55.263712 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:23:55.273110 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 02:23:55.296550 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 02:23:55.306892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 02:23:55.337138 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 02:23:55.361407 systemd-journald[1202]: Time spent on flushing to /var/log/journal/b8fe166bf7984101b86321d7f00f3243 is 301.201ms for 970 entries.
Jan 20 02:23:55.361407 systemd-journald[1202]: System Journal (/var/log/journal/b8fe166bf7984101b86321d7f00f3243) is 8M, max 195.6M, 187.6M free.
Jan 20 02:23:55.715964 systemd-journald[1202]: Received client request to flush runtime journal.
Jan 20 02:23:55.719964 kernel: loop0: detected capacity change from 0 to 128560
Jan 20 02:23:55.379992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 02:23:55.433164 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 02:23:55.516144 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 02:23:55.602820 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 02:23:55.603303 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 02:23:55.629963 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 02:23:55.630358 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 02:23:55.690979 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 02:23:55.691350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 02:23:55.732524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:23:55.786849 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 02:23:55.811919 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 02:23:55.875710 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 02:23:56.225204 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 02:23:58.621874 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 02:23:58.691373 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 20 02:23:58.691719 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 02:23:58.833964 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 02:23:58.876789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 02:23:58.897223 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 02:23:59.089977 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jan 20 02:23:59.090007 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jan 20 02:23:59.130853 kernel: loop1: detected capacity change from 0 to 229808
Jan 20 02:23:59.134192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 02:23:59.199004 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 02:23:59.561552 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 02:23:59.589251 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 20 02:23:59.806127 kernel: loop2: detected capacity change from 0 to 110984
Jan 20 02:24:00.016902 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 02:24:00.147208 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 02:24:01.468840 kernel: loop3: detected capacity change from 0 to 128560
Jan 20 02:24:01.517758 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 20 02:24:01.517793 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 20 02:24:01.565233 kernel: loop4: detected capacity change from 0 to 229808
Jan 20 02:24:01.564920 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 02:24:02.434114 kernel: loop5: detected capacity change from 0 to 110984
Jan 20 02:24:02.619892 (sd-merge)[1262]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 20 02:24:02.620963 (sd-merge)[1262]: Merged extensions into '/usr'.
Jan 20 02:24:02.673332 systemd[1]: Reload requested from client PID 1232 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 02:24:02.673372 systemd[1]: Reloading...
Jan 20 02:24:03.108758 zram_generator::config[1289]: No configuration found.
Jan 20 02:24:04.490518 systemd[1]: Reloading finished in 1814 ms.
Jan 20 02:24:04.595851 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 02:24:04.606188 ldconfig[1227]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 02:24:04.645590 systemd[1]: Starting ensure-sysext.service...
Jan 20 02:24:04.661428 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 02:24:04.704782 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 02:24:05.400963 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)...
Jan 20 02:24:05.401014 systemd[1]: Reloading...
Jan 20 02:24:05.600119 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 02:24:05.600193 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 02:24:05.600877 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 02:24:05.602211 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 20 02:24:05.603998 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 20 02:24:05.604755 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Jan 20 02:24:05.604974 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Jan 20 02:24:05.673494 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:24:05.673517 systemd-tmpfiles[1326]: Skipping /boot
Jan 20 02:24:05.922358 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 02:24:05.922400 systemd-tmpfiles[1326]: Skipping /boot
Jan 20 02:24:06.010842 zram_generator::config[1353]: No configuration found.
Jan 20 02:24:08.877832 systemd[1]: Reloading finished in 3476 ms.
Jan 20 02:24:08.943360 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 02:24:08.981335 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:24:09.108974 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 02:24:09.142659 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 02:24:09.183364 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 02:24:09.242275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 02:24:09.281263 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 02:24:09.343263 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 02:24:09.439444 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:24:09.448569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:24:09.503260 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 02:24:09.544367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 02:24:09.602946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 02:24:09.628251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:24:09.628674 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:24:09.661302 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 02:24:09.676764 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:24:09.694928 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 02:24:09.720464 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 02:24:09.742619 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 02:24:09.768676 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 02:24:09.773453 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 02:24:09.795244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 02:24:09.796463 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 02:24:09.884514 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:24:09.891286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:24:09.913380 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 02:24:09.945253 systemd-udevd[1396]: Using default interface naming scheme 'v255'.
Jan 20 02:24:09.951183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 02:24:09.990950 augenrules[1426]: No rules
Jan 20 02:24:10.028618 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 02:24:10.073619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:24:10.075457 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:24:10.108334 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 02:24:10.138287 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:24:10.158168 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 02:24:10.161661 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 02:24:10.195461 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 02:24:10.233903 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 02:24:10.284959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 02:24:10.285382 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 02:24:10.319274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 02:24:10.319656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 02:24:10.373824 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 02:24:10.421705 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 02:24:10.424252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 02:24:10.450406 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 02:24:10.522019 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 02:24:10.871172 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:24:10.887811 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 02:24:10.904490 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:24:10.927700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 02:24:11.084285 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 02:24:11.166267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 02:24:11.230470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 02:24:11.342347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 02:24:11.342423 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 02:24:11.361158 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 02:24:11.365947 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 02:24:11.365991 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 02:24:11.371233 systemd[1]: Finished ensure-sysext.service.
Jan 20 02:24:11.380691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 02:24:11.381993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 02:24:11.429878 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 02:24:11.432219 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 02:24:11.444603 augenrules[1473]: /sbin/augenrules: No change
Jan 20 02:24:11.452283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 02:24:11.452656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 02:24:11.477400 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 02:24:11.483523 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 02:24:11.509208 augenrules[1500]: No rules
Jan 20 02:24:11.512217 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 02:24:11.516207 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 02:24:11.589136 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 02:24:11.605576 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 02:24:11.605667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 02:24:11.624317 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 02:24:11.880959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 02:24:11.903301 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 02:24:11.928306 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 02:24:12.100882 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 20 02:24:12.116557 kernel: ACPI: button: Power Button [PWRF]
Jan 20 02:24:12.156274 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 02:24:12.288496 systemd-resolved[1395]: Positive Trust Anchors:
Jan 20 02:24:12.288517 systemd-resolved[1395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 02:24:12.288560 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 02:24:12.360561 systemd-resolved[1395]: Defaulting to hostname 'linux'.
Jan 20 02:24:12.413139 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 02:24:12.417528 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:24:13.929498 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 02:24:14.597510 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 02:24:15.383415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 02:24:15.886463 systemd-networkd[1486]: lo: Link UP
Jan 20 02:24:15.886478 systemd-networkd[1486]: lo: Gained carrier
Jan 20 02:24:15.905649 systemd-networkd[1486]: Enumeration completed
Jan 20 02:24:15.909826 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 02:24:15.910200 systemd[1]: Reached target network.target - Network.
Jan 20 02:24:15.945717 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 02:24:15.945735 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 02:24:15.981099 systemd-networkd[1486]: eth0: Link UP
Jan 20 02:24:15.991453 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 20 02:24:16.014475 systemd-networkd[1486]: eth0: Gained carrier
Jan 20 02:24:16.020638 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 02:24:16.031096 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 02:24:16.113189 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 02:24:16.116488 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 02:24:16.234601 systemd-networkd[1486]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 02:24:16.241228 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection.
Jan 20 02:24:17.077001 systemd-resolved[1395]: Clock change detected. Flushing caches.
Jan 20 02:24:17.077219 systemd-timesyncd[1513]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 20 02:24:17.077503 systemd-timesyncd[1513]: Initial clock synchronization to Tue 2026-01-20 02:24:17.076785 UTC.
Jan 20 02:24:17.371869 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 20 02:24:18.543341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 02:24:18.612346 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 02:24:18.896592 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 02:24:18.914969 systemd-networkd[1486]: eth0: Gained IPv6LL
Jan 20 02:24:19.109692 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 02:24:19.135717 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 20 02:24:19.143064 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 02:24:19.173155 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 02:24:19.198274 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 02:24:19.241389 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 02:24:19.250152 systemd[1]: Reached target paths.target - Path Units.
Jan 20 02:24:19.304662 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 02:24:19.376255 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 02:24:19.415937 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 02:24:19.443401 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 20 02:24:19.472759 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 20 02:24:19.483311 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 20 02:24:19.499422 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 02:24:19.515237 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 20 02:24:19.527676 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 02:24:19.537750 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 02:24:19.546273 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 02:24:19.575357 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 02:24:19.585079 systemd[1]: Reached target basic.target - Basic System.
Jan 20 02:24:19.595411 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 02:24:19.595557 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 02:24:19.606703 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 02:24:19.616314 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 20 02:24:19.640734 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 02:24:19.680790 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 02:24:19.737497 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 02:24:19.780131 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 02:24:19.806859 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 02:24:19.981713 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 20 02:24:20.034587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:24:20.043344 jq[1554]: false
Jan 20 02:24:20.195751 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 02:24:20.209033 extend-filesystems[1555]: Found /dev/vda6
Jan 20 02:24:20.328782 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 20 02:24:20.381790 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing passwd entry cache
Jan 20 02:24:20.383943 oslogin_cache_refresh[1556]: Refreshing passwd entry cache
Jan 20 02:24:20.403506 extend-filesystems[1555]: Found /dev/vda9
Jan 20 02:24:20.397890 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 20 02:24:20.433821 oslogin_cache_refresh[1556]: Failure getting users, quitting
Jan 20 02:24:20.439053 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting users, quitting
Jan 20 02:24:20.439053 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 02:24:20.439053 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing group entry cache
Jan 20 02:24:20.430711 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 02:24:20.433853 oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 02:24:20.433932 oslogin_cache_refresh[1556]: Refreshing group entry cache
Jan 20 02:24:20.511423 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting groups, quitting
Jan 20 02:24:20.511423 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 02:24:20.509591 oslogin_cache_refresh[1556]: Failure getting groups, quitting
Jan 20 02:24:20.509618 oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 02:24:20.511919 extend-filesystems[1555]: Checking size of /dev/vda9
Jan 20 02:24:20.589993 extend-filesystems[1555]: Resized partition /dev/vda9
Jan 20 02:24:20.610640 extend-filesystems[1576]: resize2fs 1.47.3 (8-Jul-2025)
Jan 20 02:24:20.648407 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 02:24:20.764710 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 20 02:24:20.978412 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 20 02:24:21.050796 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 20 02:24:21.052599 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 20 02:24:21.058700 systemd[1]: Starting update-engine.service - Update Engine...
Jan 20 02:24:21.130698 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 20 02:24:21.165770 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 20 02:24:21.327581 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 02:24:21.340600 extend-filesystems[1576]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 20 02:24:21.340600 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 20 02:24:21.340600 extend-filesystems[1576]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 20 02:24:21.397365 extend-filesystems[1555]: Resized filesystem in /dev/vda9
Jan 20 02:24:21.610887 update_engine[1585]: I20260120 02:24:21.591689 1585 main.cc:92] Flatcar Update Engine starting
Jan 20 02:24:21.429408 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 20 02:24:21.611603 jq[1586]: true
Jan 20 02:24:21.429944 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 20 02:24:21.430546 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 20 02:24:21.430877 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 20 02:24:21.476616 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 20 02:24:21.477059 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 20 02:24:21.516077 systemd[1]: motdgen.service: Deactivated successfully.
Jan 20 02:24:21.516757 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 20 02:24:21.530396 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 20 02:24:21.571948 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 20 02:24:21.574351 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 20 02:24:21.973844 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 20 02:24:22.148794 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 20 02:24:22.150162 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 20 02:24:22.275084 jq[1597]: true
Jan 20 02:24:22.327338 tar[1596]: linux-amd64/LICENSE
Jan 20 02:24:22.332608 tar[1596]: linux-amd64/helm
Jan 20 02:24:22.392035 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 20 02:24:22.440387 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 20 02:24:24.215751 dbus-daemon[1552]: [system] SELinux support is enabled
Jan 20 02:24:24.306090 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 20 02:24:24.328964 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 20 02:24:24.329005 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 20 02:24:24.432942 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 20 02:24:24.433247 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 20 02:24:24.446764 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 20 02:24:24.489871 update_engine[1585]: I20260120 02:24:24.489311 1585 update_check_scheduler.cc:74] Next update check in 4m2s
Jan 20 02:24:24.490078 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 20 02:24:24.503589 systemd[1]: Started update-engine.service - Update Engine.
Jan 20 02:24:24.544253 systemd-logind[1583]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 20 02:24:24.545303 systemd-logind[1583]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 20 02:24:24.547027 systemd-logind[1583]: New seat seat0.
Jan 20 02:24:24.597739 bash[1638]: Updated "/home/core/.ssh/authorized_keys"
Jan 20 02:24:24.614795 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 20 02:24:24.776103 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 20 02:24:24.823926 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 20 02:24:24.982321 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 20 02:24:25.105100 systemd[1]: issuegen.service: Deactivated successfully.
Jan 20 02:24:25.106030 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 20 02:24:26.386615 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 20 02:24:28.088062 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 20 02:24:28.162994 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:54142.service - OpenSSH per-connection server daemon (10.0.0.1:54142).
Jan 20 02:24:28.286743 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 20 02:24:28.364927 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 20 02:24:28.444406 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 20 02:24:28.481069 systemd[1]: Reached target getty.target - Login Prompts.
Jan 20 02:24:28.692694 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 20 02:24:29.226539 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 54142 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:24:29.249131 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:24:29.330040 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 20 02:24:29.348131 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 20 02:24:29.418737 containerd[1598]: time="2026-01-20T02:24:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 20 02:24:29.484604 containerd[1598]: time="2026-01-20T02:24:29.483274914Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 20 02:24:29.607368 systemd-logind[1583]: New session 1 of user core.
Jan 20 02:24:29.720428 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 20 02:24:29.745725 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 20 02:24:30.799126 containerd[1598]: time="2026-01-20T02:24:30.795223945Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.64µs"
Jan 20 02:24:30.809359 containerd[1598]: time="2026-01-20T02:24:30.803200744Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 20 02:24:30.809359 containerd[1598]: time="2026-01-20T02:24:30.803413401Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 20 02:24:30.809359 containerd[1598]: time="2026-01-20T02:24:30.804050600Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 20 02:24:30.809359 containerd[1598]: time="2026-01-20T02:24:30.804173590Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 20 02:24:30.813957 containerd[1598]: time="2026-01-20T02:24:30.813915032Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 02:24:30.816547 containerd[1598]: time="2026-01-20T02:24:30.814303557Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 02:24:30.816636 containerd[1598]: time="2026-01-20T02:24:30.816618028Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 02:24:30.819202 containerd[1598]: time="2026-01-20T02:24:30.819165924Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 02:24:30.820828 containerd[1598]: time="2026-01-20T02:24:30.820803661Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 02:24:30.820922 containerd[1598]: time="2026-01-20T02:24:30.820902685Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 02:24:30.821009 containerd[1598]: time="2026-01-20T02:24:30.820991040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 20 02:24:30.823070 containerd[1598]: time="2026-01-20T02:24:30.823043561Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 20 02:24:30.826150 containerd[1598]: time="2026-01-20T02:24:30.826123460Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 02:24:30.827579 containerd[1598]: time="2026-01-20T02:24:30.827550975Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 02:24:30.827673 containerd[1598]: time="2026-01-20T02:24:30.827653215Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 20 02:24:30.827907 containerd[1598]: time="2026-01-20T02:24:30.827884106Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 20 02:24:30.829932 containerd[1598]: time="2026-01-20T02:24:30.829904377Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 20 02:24:30.830150 containerd[1598]: time="2026-01-20T02:24:30.830127764Z" level=info msg="metadata content store policy set" policy=shared
Jan 20 02:24:30.896913 (systemd)[1667]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 20 02:24:30.918614 systemd-logind[1583]: New session c1 of user core.
Jan 20 02:24:31.983595 containerd[1598]: time="2026-01-20T02:24:31.979426039Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 20 02:24:32.101970 containerd[1598]: time="2026-01-20T02:24:32.082821776Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 20 02:24:32.101970 containerd[1598]: time="2026-01-20T02:24:32.084375275Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 20 02:24:32.101970 containerd[1598]: time="2026-01-20T02:24:32.084490681Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 20 02:24:32.101970 containerd[1598]: time="2026-01-20T02:24:32.084521488Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 20 02:24:32.101970 containerd[1598]: time="2026-01-20T02:24:32.084641071Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 20 02:24:32.101970 containerd[1598]: time="2026-01-20T02:24:32.084983230Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 20 02:24:32.101970 containerd[1598]: time="2026-01-20T02:24:32.085121959Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 20 02:24:32.101970 containerd[1598]: time="2026-01-20T02:24:32.085144811Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 20 02:24:32.101970 containerd[1598]: time="2026-01-20T02:24:32.085160902Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 20 02:24:32.161962 containerd[1598]: time="2026-01-20T02:24:32.151760798Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 20 02:24:32.214064 containerd[1598]: time="2026-01-20T02:24:32.192122724Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 20 02:24:32.747064 containerd[1598]: time="2026-01-20T02:24:32.736342367Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 20 02:24:33.059188 containerd[1598]: time="2026-01-20T02:24:33.020141467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 20 02:24:33.086360 containerd[1598]: time="2026-01-20T02:24:33.082804265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 20 02:24:33.086360 containerd[1598]: time="2026-01-20T02:24:33.082968732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 20 02:24:33.086360 containerd[1598]: time="2026-01-20T02:24:33.083094747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 20 02:24:33.086360 containerd[1598]: time="2026-01-20T02:24:33.083117981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 20 02:24:33.154692 containerd[1598]: time="2026-01-20T02:24:33.083255738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 20 02:24:33.154692 containerd[1598]: time="2026-01-20T02:24:33.094226926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 20 02:24:33.154692 containerd[1598]: time="2026-01-20T02:24:33.094294282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 20 02:24:33.154692 containerd[1598]: time="2026-01-20T02:24:33.094315752Z" level=info msg="loading plugin"
id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 02:24:33.154692 containerd[1598]: time="2026-01-20T02:24:33.094336731Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 02:24:33.154692 containerd[1598]: time="2026-01-20T02:24:33.094787182Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 02:24:33.154692 containerd[1598]: time="2026-01-20T02:24:33.113495164Z" level=info msg="Start snapshots syncer" Jan 20 02:24:33.154692 containerd[1598]: time="2026-01-20T02:24:33.114875250Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 02:24:33.194106 containerd[1598]: time="2026-01-20T02:24:33.193318892Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController
\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 02:24:33.194106 containerd[1598]: time="2026-01-20T02:24:33.193743204Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 02:24:33.223880 containerd[1598]: time="2026-01-20T02:24:33.223808943Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 02:24:33.226035 containerd[1598]: time="2026-01-20T02:24:33.225997247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 02:24:33.226225 containerd[1598]: time="2026-01-20T02:24:33.226200747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 02:24:33.226377 containerd[1598]: time="2026-01-20T02:24:33.226353863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 02:24:33.318295 containerd[1598]: time="2026-01-20T02:24:33.296987027Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 02:24:33.318295 containerd[1598]: time="2026-01-20T02:24:33.297191448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 02:24:33.337033 containerd[1598]: time="2026-01-20T02:24:33.297256600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks 
type=io.containerd.grpc.v1 Jan 20 02:24:33.343228 containerd[1598]: time="2026-01-20T02:24:33.343097123Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 02:24:33.344125 containerd[1598]: time="2026-01-20T02:24:33.344019314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.407896553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.408211982Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.504678271Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.504999751Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.505022944Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.505042831Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.505058320Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.505072407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 02:24:33.531206 containerd[1598]: 
time="2026-01-20T02:24:33.505329025Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.505416689Z" level=info msg="runtime interface created" Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.505497350Z" level=info msg="created NRI interface" Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.505546001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.505601554Z" level=info msg="Connect containerd service" Jan 20 02:24:33.531206 containerd[1598]: time="2026-01-20T02:24:33.505808621Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 02:24:33.535579 containerd[1598]: time="2026-01-20T02:24:33.511724092Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 02:24:36.100870 tar[1596]: linux-amd64/README.md Jan 20 02:24:37.015138 systemd[1667]: Queued start job for default target default.target. Jan 20 02:24:37.445046 systemd[1667]: Created slice app.slice - User Application Slice. Jan 20 02:24:37.447673 systemd[1667]: Reached target paths.target - Paths. Jan 20 02:24:37.447761 systemd[1667]: Reached target timers.target - Timers. Jan 20 02:24:37.484727 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 02:24:37.493208 systemd[1667]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 20 02:24:37.719759 kernel: kvm_amd: TSC scaling supported
Jan 20 02:24:37.721226 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 02:24:37.721273 kernel: kvm_amd: Nested Paging enabled
Jan 20 02:24:37.721674 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 02:24:37.730537 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 02:24:38.132641 systemd[1667]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 20 02:24:38.155141 systemd[1667]: Reached target sockets.target - Sockets.
Jan 20 02:24:38.172162 systemd[1667]: Reached target basic.target - Basic System.
Jan 20 02:24:38.176737 systemd[1667]: Reached target default.target - Main User Target.
Jan 20 02:24:38.176839 systemd[1667]: Startup finished in 6.091s.
Jan 20 02:24:38.182202 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 20 02:24:38.295092 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 20 02:24:38.800559 containerd[1598]: time="2026-01-20T02:24:38.800415846Z" level=info msg="Start subscribing containerd event"
Jan 20 02:24:38.813131 containerd[1598]: time="2026-01-20T02:24:38.811615250Z" level=info msg="Start recovering state"
Jan 20 02:24:38.824522 containerd[1598]: time="2026-01-20T02:24:38.816993950Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 20 02:24:38.824522 containerd[1598]: time="2026-01-20T02:24:38.817245439Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 20 02:24:38.826722 containerd[1598]: time="2026-01-20T02:24:38.826681422Z" level=info msg="Start event monitor"
Jan 20 02:24:38.826932 containerd[1598]: time="2026-01-20T02:24:38.826911461Z" level=info msg="Start cni network conf syncer for default"
Jan 20 02:24:38.836254 containerd[1598]: time="2026-01-20T02:24:38.827264971Z" level=info msg="Start streaming server"
Jan 20 02:24:38.836254 containerd[1598]: time="2026-01-20T02:24:38.827288975Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 20 02:24:38.878916 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:53984.service - OpenSSH per-connection server daemon (10.0.0.1:53984).
Jan 20 02:24:38.936899 containerd[1598]: time="2026-01-20T02:24:38.936846942Z" level=info msg="runtime interface starting up..."
Jan 20 02:24:38.937125 containerd[1598]: time="2026-01-20T02:24:38.937098221Z" level=info msg="starting plugins..."
Jan 20 02:24:38.937240 containerd[1598]: time="2026-01-20T02:24:38.937212945Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 20 02:24:38.939901 systemd[1]: Started containerd.service - containerd container runtime.
Jan 20 02:24:38.950818 containerd[1598]: time="2026-01-20T02:24:38.950769388Z" level=info msg="containerd successfully booted in 9.539149s"
Jan 20 02:24:39.317219 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 53984 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:24:39.338763 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:24:39.395886 systemd-logind[1583]: New session 2 of user core.
Jan 20 02:24:39.508067 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 20 02:24:40.394556 sshd[1703]: Connection closed by 10.0.0.1 port 53984
Jan 20 02:24:40.396113 sshd-session[1700]: pam_unix(sshd:session): session closed for user core
Jan 20 02:24:41.486410 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:53984.service: Deactivated successfully.
Jan 20 02:24:41.676222 systemd[1]: session-2.scope: Deactivated successfully.
Jan 20 02:24:41.714149 systemd-logind[1583]: Session 2 logged out. Waiting for processes to exit.
Jan 20 02:24:41.783280 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:54004.service - OpenSSH per-connection server daemon (10.0.0.1:54004).
Jan 20 02:24:41.837041 systemd-logind[1583]: Removed session 2.
Jan 20 02:24:43.779998 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 54004 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:24:43.795498 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:24:43.940753 systemd-logind[1583]: New session 3 of user core.
Jan 20 02:24:44.088162 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 20 02:24:44.850392 sshd[1712]: Connection closed by 10.0.0.1 port 54004
Jan 20 02:24:44.855095 sshd-session[1709]: pam_unix(sshd:session): session closed for user core
Jan 20 02:24:44.924580 systemd-logind[1583]: Session 3 logged out. Waiting for processes to exit.
Jan 20 02:24:44.927166 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:54004.service: Deactivated successfully.
Jan 20 02:24:44.956932 systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 02:24:44.994248 systemd-logind[1583]: Removed session 3.
Jan 20 02:24:48.337655 kernel: EDAC MC: Ver: 3.0.0
Jan 20 02:24:50.748861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:24:50.765895 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 20 02:24:50.775379 systemd[1]: Startup finished in 12.654s (kernel) + 43.051s (initrd) + 1min 6.852s (userspace) = 2min 2.559s.
Jan 20 02:24:50.855397 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:24:54.948138 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:37152.service - OpenSSH per-connection server daemon (10.0.0.1:37152).
Jan 20 02:24:54.963591 kubelet[1722]: E0120 02:24:54.960955 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:24:54.980149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:24:54.980424 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:24:55.009057 systemd[1]: kubelet.service: Consumed 6.467s CPU time, 271.4M memory peak.
Jan 20 02:24:55.338349 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 37152 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:24:55.345324 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:24:55.402715 systemd-logind[1583]: New session 4 of user core.
Jan 20 02:24:55.430886 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 20 02:24:55.613923 sshd[1736]: Connection closed by 10.0.0.1 port 37152
Jan 20 02:24:55.618207 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Jan 20 02:24:55.655231 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:37152.service: Deactivated successfully.
Jan 20 02:24:55.668934 systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 02:24:55.679508 systemd-logind[1583]: Session 4 logged out. Waiting for processes to exit.
Jan 20 02:24:55.694313 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:37154.service - OpenSSH per-connection server daemon (10.0.0.1:37154).
Jan 20 02:24:55.704584 systemd-logind[1583]: Removed session 4.
Jan 20 02:24:55.980536 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 37154 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:24:56.001866 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:24:56.051414 systemd-logind[1583]: New session 5 of user core.
Jan 20 02:24:56.078648 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 20 02:24:56.213157 sshd[1745]: Connection closed by 10.0.0.1 port 37154
Jan 20 02:24:56.215886 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Jan 20 02:24:56.260094 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:37170.service - OpenSSH per-connection server daemon (10.0.0.1:37170).
Jan 20 02:24:56.269964 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:37154.service: Deactivated successfully.
Jan 20 02:24:56.286726 systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 02:24:56.297020 systemd-logind[1583]: Session 5 logged out. Waiting for processes to exit.
Jan 20 02:24:56.319026 systemd-logind[1583]: Removed session 5.
Jan 20 02:24:56.630050 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 37170 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:24:56.666332 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:24:56.765221 systemd-logind[1583]: New session 6 of user core.
Jan 20 02:24:56.793638 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 20 02:24:57.135929 sshd[1754]: Connection closed by 10.0.0.1 port 37170
Jan 20 02:24:57.137911 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Jan 20 02:24:57.217394 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:37170.service: Deactivated successfully.
Jan 20 02:24:57.239766 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 02:24:57.262851 systemd-logind[1583]: Session 6 logged out. Waiting for processes to exit.
Jan 20 02:24:57.276282 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:37192.service - OpenSSH per-connection server daemon (10.0.0.1:37192).
Jan 20 02:24:57.284321 systemd-logind[1583]: Removed session 6.
Jan 20 02:24:57.546157 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 37192 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk
Jan 20 02:24:57.559229 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:24:57.610559 systemd-logind[1583]: New session 7 of user core.
Jan 20 02:24:57.647843 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 02:24:57.910378 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 20 02:24:57.918174 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 02:25:00.078479 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 20 02:25:00.117933 (dockerd)[1784]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 20 02:25:02.349216 dockerd[1784]: time="2026-01-20T02:25:02.342640033Z" level=info msg="Starting up"
Jan 20 02:25:02.353834 dockerd[1784]: time="2026-01-20T02:25:02.352636607Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 20 02:25:02.441869 dockerd[1784]: time="2026-01-20T02:25:02.441615481Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 20 02:25:02.828120 dockerd[1784]: time="2026-01-20T02:25:02.826673049Z" level=info msg="Loading containers: start."
Jan 20 02:25:02.884414 kernel: Initializing XFRM netlink socket
Jan 20 02:25:04.752397 systemd-networkd[1486]: docker0: Link UP
Jan 20 02:25:04.795057 dockerd[1784]: time="2026-01-20T02:25:04.791993771Z" level=info msg="Loading containers: done."
Jan 20 02:25:04.928396 dockerd[1784]: time="2026-01-20T02:25:04.924359815Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 20 02:25:04.928396 dockerd[1784]: time="2026-01-20T02:25:04.928838437Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 20 02:25:04.928396 dockerd[1784]: time="2026-01-20T02:25:04.929016975Z" level=info msg="Initializing buildkit"
Jan 20 02:25:05.056164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 20 02:25:05.064664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:25:05.070258 dockerd[1784]: time="2026-01-20T02:25:05.070011881Z" level=info msg="Completed buildkit initialization"
Jan 20 02:25:05.082147 dockerd[1784]: time="2026-01-20T02:25:05.080799861Z" level=info msg="Daemon has completed initialization"
Jan 20 02:25:05.081723 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 20 02:25:05.089527 dockerd[1784]: time="2026-01-20T02:25:05.088539765Z" level=info msg="API listen on /run/docker.sock"
Jan 20 02:25:10.067601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:25:10.124207 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:25:10.220010 update_engine[1585]: I20260120 02:25:10.218621 1585 update_attempter.cc:509] Updating boot flags...
Jan 20 02:25:10.897650 kubelet[2005]: E0120 02:25:10.894838 2005 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:25:11.720281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:25:11.720683 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:25:11.730380 systemd[1]: kubelet.service: Consumed 913ms CPU time, 110.6M memory peak.
Jan 20 02:25:17.586049 containerd[1598]: time="2026-01-20T02:25:17.584664003Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 20 02:25:19.671078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount305922760.mount: Deactivated successfully.
Jan 20 02:25:21.806136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 20 02:25:21.834090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:25:25.602526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:25:25.675616 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:25:26.913532 kubelet[2098]: E0120 02:25:26.912827 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:25:26.932505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:25:26.933265 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:25:26.934943 systemd[1]: kubelet.service: Consumed 2.079s CPU time, 108.7M memory peak.
Jan 20 02:25:37.067218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 20 02:25:37.105043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:25:37.171805 containerd[1598]: time="2026-01-20T02:25:37.167931643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:25:37.180253 containerd[1598]: time="2026-01-20T02:25:37.180198298Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712"
Jan 20 02:25:37.186623 containerd[1598]: time="2026-01-20T02:25:37.185104041Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:25:37.205632 containerd[1598]: time="2026-01-20T02:25:37.205542830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:25:37.212077 containerd[1598]: time="2026-01-20T02:25:37.211777153Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 19.626945855s"
Jan 20 02:25:37.212077 containerd[1598]: time="2026-01-20T02:25:37.211833847Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 20 02:25:37.245608 containerd[1598]: time="2026-01-20T02:25:37.245103707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 20 02:25:43.176751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:25:43.224023 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:25:43.961977 kubelet[2114]: E0120 02:25:43.949057 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:25:43.982520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:25:43.983818 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:25:43.984825 systemd[1]: kubelet.service: Consumed 4.009s CPU time, 110.8M memory peak.
Jan 20 02:25:54.114049 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 20 02:25:54.205502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:25:55.644718 containerd[1598]: time="2026-01-20T02:25:55.642844888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:25:55.655312 containerd[1598]: time="2026-01-20T02:25:55.655061880Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781"
Jan 20 02:25:55.661182 containerd[1598]: time="2026-01-20T02:25:55.658676592Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:25:55.671202 containerd[1598]: time="2026-01-20T02:25:55.668413919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:25:55.674333 containerd[1598]: time="2026-01-20T02:25:55.672766426Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 18.424459783s"
Jan 20 02:25:55.674333 containerd[1598]: time="2026-01-20T02:25:55.672807622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 20 02:25:55.681537 containerd[1598]: time="2026-01-20T02:25:55.679403348Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 20 02:25:56.290256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:25:56.382962 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:25:56.958282 kubelet[2134]: E0120 02:25:56.956769 2134 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:25:56.990778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:25:56.991031 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:25:56.995712 systemd[1]: kubelet.service: Consumed 796ms CPU time, 109M memory peak.
Jan 20 02:26:06.656303 containerd[1598]: time="2026-01-20T02:26:06.655549390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:26:06.667501 containerd[1598]: time="2026-01-20T02:26:06.667292694Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102"
Jan 20 02:26:06.682048 containerd[1598]: time="2026-01-20T02:26:06.680026695Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:26:06.701748 containerd[1598]: time="2026-01-20T02:26:06.700571225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:26:06.707309 containerd[1598]: time="2026-01-20T02:26:06.707041861Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 11.027506211s"
Jan 20 02:26:06.707309 containerd[1598]: time="2026-01-20T02:26:06.707129352Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 20 02:26:06.733106 containerd[1598]: time="2026-01-20T02:26:06.726505388Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 20 02:26:07.069272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 20 02:26:07.079259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:26:08.913232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:26:09.003037 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:26:09.924536 kubelet[2156]: E0120 02:26:09.924260 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:26:09.943198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:26:09.943508 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:26:09.948194 systemd[1]: kubelet.service: Consumed 924ms CPU time, 109.7M memory peak.
Jan 20 02:26:13.865601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034354889.mount: Deactivated successfully.
Jan 20 02:26:20.076587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 20 02:26:20.097117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:26:21.896810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:26:21.946296 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:26:22.514424 containerd[1598]: time="2026-01-20T02:26:22.512127156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:26:22.522081 containerd[1598]: time="2026-01-20T02:26:22.521959307Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096"
Jan 20 02:26:22.537908 containerd[1598]: time="2026-01-20T02:26:22.533556204Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:26:22.562036 containerd[1598]: time="2026-01-20T02:26:22.558749946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:26:22.562036 containerd[1598]: time="2026-01-20T02:26:22.559735651Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 15.833172817s"
Jan 20 02:26:22.562036 containerd[1598]: time="2026-01-20T02:26:22.559777720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 20 02:26:22.562705 containerd[1598]: time="2026-01-20T02:26:22.562643847Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 20 02:26:22.704610 kubelet[2180]: E0120 02:26:22.704265 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:26:22.889013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:26:22.894252 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:26:22.907159 systemd[1]: kubelet.service: Consumed 772ms CPU time, 110.3M memory peak.
Jan 20 02:26:24.276885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2278265136.mount: Deactivated successfully.
Jan 20 02:26:33.177037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 20 02:26:33.217575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:26:36.442905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:26:36.522343 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:26:36.933689 containerd[1598]: time="2026-01-20T02:26:36.933379019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:26:36.938548 containerd[1598]: time="2026-01-20T02:26:36.936245951Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jan 20 02:26:36.947896 containerd[1598]: time="2026-01-20T02:26:36.945628585Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:26:36.964092 containerd[1598]: time="2026-01-20T02:26:36.963698694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:26:36.971733 containerd[1598]: time="2026-01-20T02:26:36.968569626Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 14.405721899s"
Jan 20 02:26:36.971733 containerd[1598]: time="2026-01-20T02:26:36.971122804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 20 02:26:36.981521 containerd[1598]: time="2026-01-20T02:26:36.981168380Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 20 02:26:37.339122 kubelet[2250]: E0120 02:26:37.339020 2250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:26:37.370889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:26:37.371801 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:26:37.374699 systemd[1]: kubelet.service: Consumed 1.062s CPU time, 110.7M memory peak.
Jan 20 02:26:38.262717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount602282354.mount: Deactivated successfully.
Jan 20 02:26:38.381771 containerd[1598]: time="2026-01-20T02:26:38.379249742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 02:26:38.399271 containerd[1598]: time="2026-01-20T02:26:38.393102120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jan 20 02:26:38.410156 containerd[1598]: time="2026-01-20T02:26:38.407715869Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 02:26:38.430532 containerd[1598]: time="2026-01-20T02:26:38.425350358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 02:26:38.436368 containerd[1598]: time="2026-01-20T02:26:38.431554245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.450337025s"
Jan 20 02:26:38.436368 containerd[1598]: time="2026-01-20T02:26:38.431775807Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 20 02:26:38.445502 containerd[1598]: time="2026-01-20T02:26:38.442876837Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 20 02:26:40.681403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011833357.mount: Deactivated successfully.
Jan 20 02:26:47.645365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 20 02:26:47.706106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:26:49.222102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:26:49.291734 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:26:50.888424 kubelet[2323]: E0120 02:26:50.876089 2323 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:26:50.935814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:26:50.944314 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:26:50.958810 systemd[1]: kubelet.service: Consumed 879ms CPU time, 108.6M memory peak.
Jan 20 02:27:01.069668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 20 02:27:01.092021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:27:03.456044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:27:03.508105 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:27:04.610910 kubelet[2338]: E0120 02:27:04.608728 2338 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:27:04.634122 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:27:04.634491 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:27:04.638732 systemd[1]: kubelet.service: Consumed 912ms CPU time, 110.3M memory peak.
Jan 20 02:27:07.524333 containerd[1598]: time="2026-01-20T02:27:07.523385056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:27:07.533120 containerd[1598]: time="2026-01-20T02:27:07.530427739Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227"
Jan 20 02:27:07.543652 containerd[1598]: time="2026-01-20T02:27:07.539077602Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:27:07.570693 containerd[1598]: time="2026-01-20T02:27:07.567266257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:27:07.830765 containerd[1598]: time="2026-01-20T02:27:07.822981538Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 29.379856511s"
Jan 20 02:27:07.830765 containerd[1598]: time="2026-01-20T02:27:07.823400589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 20 02:27:14.851166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 20 02:27:14.885004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:27:16.773290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:27:16.828082 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:27:17.584029 kubelet[2384]: E0120 02:27:17.580783 2384 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:27:17.595174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:27:17.595512 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:27:17.598716 systemd[1]: kubelet.service: Consumed 704ms CPU time, 110.4M memory peak.
Jan 20 02:27:27.819909 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 20 02:27:27.869846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:27:29.318194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:27:29.387181 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:27:30.297832 kubelet[2400]: E0120 02:27:30.294368 2400 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:27:30.324814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:27:30.325867 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:27:30.328121 systemd[1]: kubelet.service: Consumed 749ms CPU time, 110.6M memory peak.
Jan 20 02:27:35.581014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:27:35.581338 systemd[1]: kubelet.service: Consumed 749ms CPU time, 110.6M memory peak.
Jan 20 02:27:35.622254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:27:35.997028 systemd[1]: Reload requested from client PID 2416 ('systemctl') (unit session-7.scope)...
Jan 20 02:27:35.997049 systemd[1]: Reloading...
Jan 20 02:27:36.853600 zram_generator::config[2459]: No configuration found.
Jan 20 02:27:39.042491 systemd[1]: Reloading finished in 3042 ms.
Jan 20 02:27:39.669300 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:27:39.680887 systemd[1]: kubelet.service: Deactivated successfully.
Jan 20 02:27:39.681298 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:27:39.681381 systemd[1]: kubelet.service: Consumed 385ms CPU time, 98.3M memory peak.
Jan 20 02:27:39.708105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:27:40.762894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:27:40.843355 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 02:27:41.328045 kubelet[2509]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 02:27:41.328045 kubelet[2509]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 02:27:41.328045 kubelet[2509]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 02:27:41.342662 kubelet[2509]: I0120 02:27:41.326846 2509 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 02:27:42.938958 kubelet[2509]: I0120 02:27:42.934015 2509 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 20 02:27:42.938958 kubelet[2509]: I0120 02:27:42.934073 2509 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 02:27:42.938958 kubelet[2509]: I0120 02:27:42.934379 2509 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 20 02:27:43.330727 kubelet[2509]: E0120 02:27:43.329027 2509 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 20 02:27:43.359527 kubelet[2509]: I0120 02:27:43.346014 2509 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 02:27:43.519499 kubelet[2509]: I0120 02:27:43.511833 2509 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 02:27:43.573683 kubelet[2509]: I0120 02:27:43.571898 2509 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 20 02:27:43.573683 kubelet[2509]: I0120 02:27:43.572523 2509 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 02:27:43.573683 kubelet[2509]: I0120 02:27:43.572623 2509 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 02:27:43.573683 kubelet[2509]: I0120 02:27:43.572833 2509 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 02:27:43.574307 kubelet[2509]: I0120 02:27:43.572844 2509 container_manager_linux.go:303] "Creating device plugin manager"
Jan 20 02:27:43.574307 kubelet[2509]: I0120 02:27:43.573407 2509 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:27:43.607877 kubelet[2509]: I0120 02:27:43.589235 2509 kubelet.go:480] "Attempting to sync node with API server"
Jan 20 02:27:43.607877 kubelet[2509]: I0120 02:27:43.602365 2509 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 02:27:43.607877 kubelet[2509]: I0120 02:27:43.602557 2509 kubelet.go:386] "Adding apiserver pod source"
Jan 20 02:27:43.607877 kubelet[2509]: I0120 02:27:43.605511 2509 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 02:27:43.675777 kubelet[2509]: E0120 02:27:43.669525 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 02:27:43.675777 kubelet[2509]: E0120 02:27:43.672049 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 02:27:43.698918 kubelet[2509]: I0120 02:27:43.696548 2509 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 20 02:27:43.698918 kubelet[2509]: I0120 02:27:43.697423 2509 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 20 02:27:43.706185 kubelet[2509]: W0120 02:27:43.703296 2509 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 20 02:27:43.817799 kubelet[2509]: I0120 02:27:43.814616 2509 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 20 02:27:43.817799 kubelet[2509]: I0120 02:27:43.814829 2509 server.go:1289] "Started kubelet"
Jan 20 02:27:43.817799 kubelet[2509]: I0120 02:27:43.815541 2509 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 02:27:43.817799 kubelet[2509]: I0120 02:27:43.815488 2509 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 02:27:43.817799 kubelet[2509]: I0120 02:27:43.816261 2509 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 02:27:43.817799 kubelet[2509]: I0120 02:27:43.817138 2509 server.go:317] "Adding debug handlers to kubelet server"
Jan 20 02:27:43.828866 kubelet[2509]: I0120 02:27:43.827412 2509 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 02:27:43.840734 kubelet[2509]: I0120 02:27:43.837820 2509 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 02:27:43.850130 kubelet[2509]: I0120 02:27:43.845866 2509 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 20 02:27:43.850130 kubelet[2509]: E0120 02:27:43.846016 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:43.850130 kubelet[2509]: I0120 02:27:43.847292 2509 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 20 02:27:43.850130 kubelet[2509]: I0120 02:27:43.847359 2509 reconciler.go:26] "Reconciler: start to sync state"
Jan 20 02:27:43.850130 kubelet[2509]: E0120 02:27:43.848118 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 02:27:43.850130 kubelet[2509]: E0120 02:27:43.848184 2509 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms"
Jan 20 02:27:43.850130 kubelet[2509]: I0120 02:27:43.848743 2509 factory.go:223] Registration of the systemd container factory successfully
Jan 20 02:27:43.850130 kubelet[2509]: I0120 02:27:43.848832 2509 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 02:27:43.870729 kubelet[2509]: E0120 02:27:43.868866 2509 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 02:27:43.870729 kubelet[2509]: I0120 02:27:43.868998 2509 factory.go:223] Registration of the containerd container factory successfully
Jan 20 02:27:43.887756 kubelet[2509]: E0120 02:27:43.865678 2509 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4f687b5e4510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:27:43.814731024 +0000 UTC m=+2.937505022,LastTimestamp:2026-01-20 02:27:43.814731024 +0000 UTC m=+2.937505022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 20 02:27:43.972180 kubelet[2509]: E0120 02:27:43.971734 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:44.001887 kubelet[2509]: I0120 02:27:44.001361 2509 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 20 02:27:44.041624 kubelet[2509]: I0120 02:27:44.041215 2509 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 02:27:44.041624 kubelet[2509]: I0120 02:27:44.041243 2509 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 02:27:44.041624 kubelet[2509]: I0120 02:27:44.041358 2509 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:27:44.052902 kubelet[2509]: E0120 02:27:44.051332 2509 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms"
Jan 20 02:27:44.064852 kubelet[2509]: I0120 02:27:44.064773 2509 policy_none.go:49] "None policy: Start"
Jan 20 02:27:44.064852 kubelet[2509]: I0120 02:27:44.064840 2509 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 20 02:27:44.064852 kubelet[2509]: I0120 02:27:44.064861 2509 state_mem.go:35] "Initializing new in-memory state store"
Jan 20 02:27:44.073088 kubelet[2509]: E0120 02:27:44.072793 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:44.107217 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 20 02:27:44.180762 kubelet[2509]: E0120 02:27:44.179998 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:44.316166 kubelet[2509]: E0120 02:27:44.305305 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:44.466182 kubelet[2509]: E0120 02:27:44.451967 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:44.467955 kubelet[2509]: E0120 02:27:44.467828 2509 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms"
Jan 20 02:27:44.631170 kubelet[2509]: E0120 02:27:44.610980 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:44.631170 kubelet[2509]: I0120 02:27:44.676117 2509 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 20 02:27:44.631170 kubelet[2509]: I0120 02:27:44.688483 2509 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 20 02:27:44.803219 kubelet[2509]: I0120 02:27:44.768901 2509 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 02:27:44.804068 kubelet[2509]: I0120 02:27:44.803846 2509 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 20 02:27:44.804536 kubelet[2509]: E0120 02:27:44.804262 2509 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 02:27:44.982168 kubelet[2509]: E0120 02:27:44.841549 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 20 02:27:44.982168 kubelet[2509]: E0120 02:27:44.736277 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:44.982168 kubelet[2509]: E0120 02:27:44.898087 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 02:27:44.982168 kubelet[2509]: E0120 02:27:44.911088 2509 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 02:27:44.982168 kubelet[2509]: E0120 02:27:44.919875 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 02:27:44.982168 kubelet[2509]: E0120 02:27:44.978950 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:44.934908 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 20 02:27:45.028196 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 20 02:27:45.081923 kubelet[2509]: E0120 02:27:45.080703 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:45.114568 kubelet[2509]: E0120 02:27:45.114272 2509 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:27:45.189215 kubelet[2509]: E0120 02:27:45.185856 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:45.243499 kubelet[2509]: E0120 02:27:45.241868 2509 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 20 02:27:45.246133 kubelet[2509]: I0120 02:27:45.245037 2509 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 02:27:45.246133 kubelet[2509]: I0120 02:27:45.245092 2509 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 02:27:45.321534 kubelet[2509]: E0120 02:27:45.297694 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:45.422733 kubelet[2509]: I0120 02:27:45.326805 2509 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 02:27:45.427317 kubelet[2509]: E0120 02:27:45.427282 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 02:27:45.427679 kubelet[2509]: E0120 02:27:45.427649 2509 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 20 02:27:45.427898 kubelet[2509]: E0120 02:27:45.427873 2509 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="1.6s"
Jan 20 02:27:45.506295 kubelet[2509]: E0120 02:27:45.501362 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 02:27:45.590425 kubelet[2509]: E0120 02:27:45.532147 2509 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Jan 20 02:27:45.778044 kubelet[2509]: E0120 02:27:45.708545 2509 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:27:45.876223 kubelet[2509]: E0120 02:27:45.736908 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:27:45.901939 kubelet[2509]: I0120 02:27:45.783080 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 02:27:46.504906 kubelet[2509]: I0120 02:27:46.503135 2509 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:27:46.944086 kubelet[2509]: I0120 02:27:46.889489 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6310083ba1b0d7f846373d05315aa17b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6310083ba1b0d7f846373d05315aa17b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:27:47.078846 kubelet[2509]: I0120 02:27:46.976341 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6310083ba1b0d7f846373d05315aa17b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6310083ba1b0d7f846373d05315aa17b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:27:47.078846 kubelet[2509]: I0120 02:27:46.977864 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6310083ba1b0d7f846373d05315aa17b-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"6310083ba1b0d7f846373d05315aa17b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:27:47.078846 kubelet[2509]: E0120 02:27:46.940733 2509 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 20 02:27:47.078846 kubelet[2509]: E0120 02:27:46.941582 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:27:47.126268 kubelet[2509]: E0120 02:27:47.081986 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 02:27:47.131081 kubelet[2509]: E0120 02:27:47.130932 2509 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="3.2s" Jan 20 02:27:47.315215 kubelet[2509]: I0120 02:27:47.279391 2509 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:27:47.315215 kubelet[2509]: I0120 02:27:47.290822 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 20 02:27:47.315215 kubelet[2509]: I0120 02:27:47.291124 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:27:47.315215 kubelet[2509]: I0120 02:27:47.291208 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:27:47.315215 kubelet[2509]: I0120 02:27:47.291238 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:27:47.315215 kubelet[2509]: I0120 02:27:47.291265 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:27:47.401392 kubelet[2509]: E0120 02:27:47.398128 2509 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 20 02:27:47.670044 kubelet[2509]: E0120 
02:27:47.639758 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 02:27:48.646028 kubelet[2509]: I0120 02:27:48.603898 2509 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:27:48.646028 kubelet[2509]: E0120 02:27:48.611246 2509 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 20 02:27:48.677005 kubelet[2509]: E0120 02:27:48.676918 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 02:27:48.693939 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 20 02:27:48.799810 kubelet[2509]: E0120 02:27:48.796420 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:27:48.876744 containerd[1598]: time="2026-01-20T02:27:48.874784491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 20 02:27:48.907521 systemd[1]: Created slice kubepods-burstable-pod6310083ba1b0d7f846373d05315aa17b.slice - libcontainer container kubepods-burstable-pod6310083ba1b0d7f846373d05315aa17b.slice. 
Jan 20 02:27:49.186917 kubelet[2509]: E0120 02:27:49.185317 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:27:49.200183 containerd[1598]: time="2026-01-20T02:27:49.198260200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6310083ba1b0d7f846373d05315aa17b,Namespace:kube-system,Attempt:0,}" Jan 20 02:27:49.217184 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 20 02:27:49.238352 kubelet[2509]: E0120 02:27:49.237788 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:27:49.240817 containerd[1598]: time="2026-01-20T02:27:49.239265280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 20 02:27:49.424843 kubelet[2509]: I0120 02:27:49.421381 2509 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:27:49.426496 kubelet[2509]: E0120 02:27:49.425698 2509 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 20 02:27:49.588189 containerd[1598]: time="2026-01-20T02:27:49.588124128Z" level=info msg="connecting to shim 4db5d2e2a3611e58826ce2199e7562ebed71c30806983c6fcd368de7dd8471e5" address="unix:///run/containerd/s/b517ae368839d07c26859540a13430e6485d562149f9ca1c51cad63d09640369" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:27:49.625864 kubelet[2509]: E0120 02:27:49.624852 2509 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot 
create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 02:27:49.710485 containerd[1598]: time="2026-01-20T02:27:49.704533285Z" level=info msg="connecting to shim e65765ca8fc636e317b2a7ce39254d601adcaa1ad72538c8936f098ae40d3b97" address="unix:///run/containerd/s/103de354f03a8f160eb591664d2914d412d19707022ce5aadfdd1f37c0c26c80" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:27:49.780759 containerd[1598]: time="2026-01-20T02:27:49.780675979Z" level=info msg="connecting to shim fde323a45d8139473ef07f634aa84a7af73267c93abaa521494a47fc041e01ef" address="unix:///run/containerd/s/09a3598492cb50afc807f3fed3aea2c0bc2571114355d01bc71e8429f9076032" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:27:49.989826 kubelet[2509]: E0120 02:27:49.989558 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:27:52.001830 kubelet[2509]: E0120 02:27:51.992345 2509 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="6.4s" Jan 20 02:27:52.008182 kubelet[2509]: I0120 02:27:52.005842 2509 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:27:52.012199 kubelet[2509]: E0120 02:27:52.012147 2509 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 
20 02:27:52.025910 systemd[1]: Started cri-containerd-e65765ca8fc636e317b2a7ce39254d601adcaa1ad72538c8936f098ae40d3b97.scope - libcontainer container e65765ca8fc636e317b2a7ce39254d601adcaa1ad72538c8936f098ae40d3b97. Jan 20 02:27:52.077422 systemd[1]: Started cri-containerd-fde323a45d8139473ef07f634aa84a7af73267c93abaa521494a47fc041e01ef.scope - libcontainer container fde323a45d8139473ef07f634aa84a7af73267c93abaa521494a47fc041e01ef. Jan 20 02:27:52.169165 kubelet[2509]: E0120 02:27:52.163122 2509 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4f687b5e4510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:27:43.814731024 +0000 UTC m=+2.937505022,LastTimestamp:2026-01-20 02:27:43.814731024 +0000 UTC m=+2.937505022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:27:52.194064 systemd[1]: Started cri-containerd-4db5d2e2a3611e58826ce2199e7562ebed71c30806983c6fcd368de7dd8471e5.scope - libcontainer container 4db5d2e2a3611e58826ce2199e7562ebed71c30806983c6fcd368de7dd8471e5. 
Jan 20 02:27:53.091084 kubelet[2509]: E0120 02:27:53.090977 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 02:27:53.205538 kubelet[2509]: E0120 02:27:53.204369 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 02:27:53.222748 containerd[1598]: time="2026-01-20T02:27:53.213423319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e65765ca8fc636e317b2a7ce39254d601adcaa1ad72538c8936f098ae40d3b97\"" Jan 20 02:27:53.285699 containerd[1598]: time="2026-01-20T02:27:53.282560951Z" level=info msg="CreateContainer within sandbox \"e65765ca8fc636e317b2a7ce39254d601adcaa1ad72538c8936f098ae40d3b97\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 02:27:53.614507 containerd[1598]: time="2026-01-20T02:27:53.613709391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6310083ba1b0d7f846373d05315aa17b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fde323a45d8139473ef07f634aa84a7af73267c93abaa521494a47fc041e01ef\"" Jan 20 02:27:53.623816 kubelet[2509]: E0120 02:27:53.623758 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 02:27:53.658576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767802055.mount: Deactivated successfully. Jan 20 02:27:53.727193 containerd[1598]: time="2026-01-20T02:27:53.713177924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4db5d2e2a3611e58826ce2199e7562ebed71c30806983c6fcd368de7dd8471e5\"" Jan 20 02:27:53.727193 containerd[1598]: time="2026-01-20T02:27:53.724234216Z" level=info msg="Container 29ae5e27f92c218d13797c2aa357dad54c6049f4fedb72fbe4a971b15dd6af72: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:27:53.716980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461967566.mount: Deactivated successfully. Jan 20 02:27:53.740107 containerd[1598]: time="2026-01-20T02:27:53.740047480Z" level=info msg="CreateContainer within sandbox \"fde323a45d8139473ef07f634aa84a7af73267c93abaa521494a47fc041e01ef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 02:27:53.765795 containerd[1598]: time="2026-01-20T02:27:53.764282068Z" level=info msg="CreateContainer within sandbox \"4db5d2e2a3611e58826ce2199e7562ebed71c30806983c6fcd368de7dd8471e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 02:27:53.818420 containerd[1598]: time="2026-01-20T02:27:53.816332374Z" level=info msg="CreateContainer within sandbox \"e65765ca8fc636e317b2a7ce39254d601adcaa1ad72538c8936f098ae40d3b97\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"29ae5e27f92c218d13797c2aa357dad54c6049f4fedb72fbe4a971b15dd6af72\"" Jan 20 02:27:53.823515 containerd[1598]: time="2026-01-20T02:27:53.820576630Z" level=info msg="StartContainer for \"29ae5e27f92c218d13797c2aa357dad54c6049f4fedb72fbe4a971b15dd6af72\"" Jan 20 02:27:53.832404 containerd[1598]: 
time="2026-01-20T02:27:53.831991820Z" level=info msg="connecting to shim 29ae5e27f92c218d13797c2aa357dad54c6049f4fedb72fbe4a971b15dd6af72" address="unix:///run/containerd/s/103de354f03a8f160eb591664d2914d412d19707022ce5aadfdd1f37c0c26c80" protocol=ttrpc version=3 Jan 20 02:27:53.896691 containerd[1598]: time="2026-01-20T02:27:53.888421564Z" level=info msg="Container 244afda0cafd29b0723f9c2d0c810493eb91ac06cb6f4ad2c7a63e21bf658e1f: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:27:54.001008 containerd[1598]: time="2026-01-20T02:27:53.999281866Z" level=info msg="Container b79845d8f6f3106ee59e663014f88b2c9f6c79888b8e7d4210740dbc1d79a29d: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:27:54.121500 containerd[1598]: time="2026-01-20T02:27:54.121302327Z" level=info msg="CreateContainer within sandbox \"4db5d2e2a3611e58826ce2199e7562ebed71c30806983c6fcd368de7dd8471e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b79845d8f6f3106ee59e663014f88b2c9f6c79888b8e7d4210740dbc1d79a29d\"" Jan 20 02:27:54.133166 containerd[1598]: time="2026-01-20T02:27:54.130341063Z" level=info msg="StartContainer for \"b79845d8f6f3106ee59e663014f88b2c9f6c79888b8e7d4210740dbc1d79a29d\"" Jan 20 02:27:54.133166 containerd[1598]: time="2026-01-20T02:27:54.131943267Z" level=info msg="connecting to shim b79845d8f6f3106ee59e663014f88b2c9f6c79888b8e7d4210740dbc1d79a29d" address="unix:///run/containerd/s/b517ae368839d07c26859540a13430e6485d562149f9ca1c51cad63d09640369" protocol=ttrpc version=3 Jan 20 02:27:54.143239 containerd[1598]: time="2026-01-20T02:27:54.140768036Z" level=info msg="CreateContainer within sandbox \"fde323a45d8139473ef07f634aa84a7af73267c93abaa521494a47fc041e01ef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"244afda0cafd29b0723f9c2d0c810493eb91ac06cb6f4ad2c7a63e21bf658e1f\"" Jan 20 02:27:54.146004 containerd[1598]: time="2026-01-20T02:27:54.145829331Z" level=info msg="StartContainer for 
\"244afda0cafd29b0723f9c2d0c810493eb91ac06cb6f4ad2c7a63e21bf658e1f\"" Jan 20 02:27:54.148008 containerd[1598]: time="2026-01-20T02:27:54.147917641Z" level=info msg="connecting to shim 244afda0cafd29b0723f9c2d0c810493eb91ac06cb6f4ad2c7a63e21bf658e1f" address="unix:///run/containerd/s/09a3598492cb50afc807f3fed3aea2c0bc2571114355d01bc71e8429f9076032" protocol=ttrpc version=3 Jan 20 02:27:54.174873 systemd[1]: Started cri-containerd-29ae5e27f92c218d13797c2aa357dad54c6049f4fedb72fbe4a971b15dd6af72.scope - libcontainer container 29ae5e27f92c218d13797c2aa357dad54c6049f4fedb72fbe4a971b15dd6af72. Jan 20 02:27:54.729772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount79603307.mount: Deactivated successfully. Jan 20 02:27:54.835195 kubelet[2509]: E0120 02:27:54.835060 2509 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:27:54.958736 systemd[1]: Started cri-containerd-244afda0cafd29b0723f9c2d0c810493eb91ac06cb6f4ad2c7a63e21bf658e1f.scope - libcontainer container 244afda0cafd29b0723f9c2d0c810493eb91ac06cb6f4ad2c7a63e21bf658e1f. Jan 20 02:27:55.169211 systemd[1]: Started cri-containerd-b79845d8f6f3106ee59e663014f88b2c9f6c79888b8e7d4210740dbc1d79a29d.scope - libcontainer container b79845d8f6f3106ee59e663014f88b2c9f6c79888b8e7d4210740dbc1d79a29d. 
Jan 20 02:27:55.264500 kubelet[2509]: I0120 02:27:55.263769 2509 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:27:55.313716 kubelet[2509]: E0120 02:27:55.271939 2509 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jan 20 02:27:55.631091 containerd[1598]: time="2026-01-20T02:27:55.631003231Z" level=info msg="StartContainer for \"29ae5e27f92c218d13797c2aa357dad54c6049f4fedb72fbe4a971b15dd6af72\" returns successfully" Jan 20 02:27:55.746098 kubelet[2509]: E0120 02:27:55.745031 2509 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:27:55.946344 containerd[1598]: time="2026-01-20T02:27:55.930404641Z" level=info msg="StartContainer for \"244afda0cafd29b0723f9c2d0c810493eb91ac06cb6f4ad2c7a63e21bf658e1f\" returns successfully" Jan 20 02:27:56.632142 containerd[1598]: time="2026-01-20T02:27:56.631286882Z" level=info msg="StartContainer for \"b79845d8f6f3106ee59e663014f88b2c9f6c79888b8e7d4210740dbc1d79a29d\" returns successfully" Jan 20 02:27:56.979795 kubelet[2509]: E0120 02:27:56.974295 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:27:58.398923 kubelet[2509]: E0120 02:27:58.398778 2509 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="7s" Jan 20 02:27:58.408569 kubelet[2509]: E0120 02:27:58.408546 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:27:58.421861 kubelet[2509]: E0120 
02:27:58.409506 2509 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 02:27:58.421861 kubelet[2509]: E0120 02:27:58.412826 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:27:59.524554 kubelet[2509]: E0120 02:27:59.518342 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:27:59.524554 kubelet[2509]: E0120 02:27:59.519291 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:27:59.541422 kubelet[2509]: E0120 02:27:59.533397 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:28:00.570191 kubelet[2509]: E0120 02:28:00.563335 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:28:00.570191 kubelet[2509]: E0120 02:28:00.567410 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:28:01.578312 kubelet[2509]: E0120 02:28:01.577822 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:28:01.686567 kubelet[2509]: I0120 02:28:01.685998 2509 kubelet_node_status.go:75] "Attempting to register 
node" node="localhost"
Jan 20 02:28:01.977885 kubelet[2509]: E0120 02:28:01.977425 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:28:05.120998 kubelet[2509]: E0120 02:28:05.118245 2509 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:28:05.761404 kubelet[2509]: E0120 02:28:05.760107 2509 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 20 02:28:13.280568 kubelet[2509]: I0120 02:28:13.280338 2509 apiserver.go:52] "Watching apiserver"
Jan 20 02:28:13.508846 kubelet[2509]: E0120 02:28:13.508285 2509 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 20 02:28:13.558245 kubelet[2509]: I0120 02:28:13.556573 2509 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 20 02:28:13.870575 kubelet[2509]: E0120 02:28:13.835060 2509 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4f687b5e4510 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:27:43.814731024 +0000 UTC m=+2.937505022,LastTimestamp:2026-01-20 02:27:43.814731024 +0000 UTC m=+2.937505022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 20 02:28:13.870575 kubelet[2509]: I0120 02:28:13.859710 2509 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 20 02:28:13.870575 kubelet[2509]: E0120 02:28:13.860062 2509 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 20 02:28:13.962659 kubelet[2509]: I0120 02:28:13.958060 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 20 02:28:14.145506 kubelet[2509]: E0120 02:28:14.143121 2509 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4f687e97ec0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:27:43.868840974 +0000 UTC m=+2.991614973,LastTimestamp:2026-01-20 02:27:43.868840974 +0000 UTC m=+2.991614973,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 20 02:28:14.197336 kubelet[2509]: I0120 02:28:14.197296 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 20 02:28:14.335591 kubelet[2509]: I0120 02:28:14.319713 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:28:14.565872 kubelet[2509]: I0120 02:28:14.534589 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 20 02:28:14.722409 kubelet[2509]: E0120 02:28:14.720594 2509 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 20 02:28:15.798221 kubelet[2509]: I0120 02:28:15.797796 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7977429219999999 podStartE2EDuration="1.797742922s" podCreationTimestamp="2026-01-20 02:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:28:15.672802652 +0000 UTC m=+34.795576650" watchObservedRunningTime="2026-01-20 02:28:15.797742922 +0000 UTC m=+34.920516920"
Jan 20 02:28:15.798221 kubelet[2509]: I0120 02:28:15.798003 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.797996134 podStartE2EDuration="1.797996134s" podCreationTimestamp="2026-01-20 02:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:28:15.797496142 +0000 UTC m=+34.920270150" watchObservedRunningTime="2026-01-20 02:28:15.797996134 +0000 UTC m=+34.920770222"
Jan 20 02:28:16.467652 kubelet[2509]: I0120 02:28:16.444012 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.443984195 podStartE2EDuration="2.443984195s" podCreationTimestamp="2026-01-20 02:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:28:16.12969958 +0000 UTC m=+35.252473578" watchObservedRunningTime="2026-01-20 02:28:16.443984195 +0000 UTC m=+35.566758204"
Jan 20 02:28:27.231023 update_engine[1585]: I20260120 02:28:27.220700 1585 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 20 02:28:27.231023 update_engine[1585]: I20260120 02:28:27.220772 1585 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 20 02:28:27.231023 update_engine[1585]: I20260120 02:28:27.221026 1585 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 20 02:28:27.298597 update_engine[1585]: I20260120 02:28:27.270834 1585 omaha_request_params.cc:62] Current group set to stable
Jan 20 02:28:27.298597 update_engine[1585]: I20260120 02:28:27.271049 1585 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 20 02:28:27.298597 update_engine[1585]: I20260120 02:28:27.271075 1585 update_attempter.cc:643] Scheduling an action processor start.
Jan 20 02:28:27.298597 update_engine[1585]: I20260120 02:28:27.271270 1585 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 20 02:28:27.298597 update_engine[1585]: I20260120 02:28:27.271517 1585 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 20 02:28:27.298597 update_engine[1585]: I20260120 02:28:27.273612 1585 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 20 02:28:27.298597 update_engine[1585]: I20260120 02:28:27.273631 1585 omaha_request_action.cc:272] Request:
Jan 20 02:28:27.298597 update_engine[1585]:
Jan 20 02:28:27.298597 update_engine[1585]:
Jan 20 02:28:27.298597 update_engine[1585]:
Jan 20 02:28:27.298597 update_engine[1585]:
Jan 20 02:28:27.298597 update_engine[1585]:
Jan 20 02:28:27.298597 update_engine[1585]:
Jan 20 02:28:27.298597 update_engine[1585]:
Jan 20 02:28:27.298597 update_engine[1585]:
Jan 20 02:28:27.298597 update_engine[1585]: I20260120 02:28:27.273644 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 20 02:28:27.298597 update_engine[1585]: I20260120 02:28:27.291581 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 20 02:28:27.313596 locksmithd[1641]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 20 02:28:27.315288 update_engine[1585]: I20260120 02:28:27.315045 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 02:28:27.340355 update_engine[1585]: E20260120 02:28:27.338521 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 20 02:28:27.340355 update_engine[1585]: I20260120 02:28:27.338699 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 20 02:28:29.068126 systemd[1]: Reload requested from client PID 2800 ('systemctl') (unit session-7.scope)...
Jan 20 02:28:29.068152 systemd[1]: Reloading...
Jan 20 02:28:30.487784 zram_generator::config[2846]: No configuration found.
Jan 20 02:28:33.020044 systemd[1]: Reloading finished in 3933 ms.
Jan 20 02:28:33.256101 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:28:33.354338 systemd[1]: kubelet.service: Deactivated successfully.
Jan 20 02:28:33.354828 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:28:33.354899 systemd[1]: kubelet.service: Consumed 7.084s CPU time, 139.2M memory peak.
Jan 20 02:28:33.391079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:28:34.681570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:28:34.715158 (kubelet)[2887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 02:28:35.058296 kubelet[2887]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 02:28:35.058296 kubelet[2887]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 02:28:35.058296 kubelet[2887]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 02:28:35.058296 kubelet[2887]: I0120 02:28:35.058162 2887 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 02:28:35.162095 kubelet[2887]: I0120 02:28:35.151190 2887 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 20 02:28:35.162095 kubelet[2887]: I0120 02:28:35.151245 2887 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 02:28:35.168473 kubelet[2887]: I0120 02:28:35.164415 2887 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 20 02:28:35.193499 kubelet[2887]: I0120 02:28:35.189746 2887 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 20 02:28:35.224780 kubelet[2887]: I0120 02:28:35.216281 2887 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 02:28:35.315645 kubelet[2887]: I0120 02:28:35.314799 2887 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 02:28:35.423731 kubelet[2887]: I0120 02:28:35.414681 2887 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 20 02:28:35.423731 kubelet[2887]: I0120 02:28:35.415084 2887 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 02:28:35.423731 kubelet[2887]: I0120 02:28:35.415128 2887 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 02:28:35.423731 kubelet[2887]: I0120 02:28:35.415485 2887 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 02:28:35.424629 kubelet[2887]: I0120 02:28:35.415504 2887 container_manager_linux.go:303] "Creating device plugin manager"
Jan 20 02:28:35.424629 kubelet[2887]: I0120 02:28:35.415567 2887 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:28:35.424629 kubelet[2887]: I0120 02:28:35.415801 2887 kubelet.go:480] "Attempting to sync node with API server"
Jan 20 02:28:35.424629 kubelet[2887]: I0120 02:28:35.415818 2887 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 02:28:35.424629 kubelet[2887]: I0120 02:28:35.415852 2887 kubelet.go:386] "Adding apiserver pod source"
Jan 20 02:28:35.424629 kubelet[2887]: I0120 02:28:35.415867 2887 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 02:28:35.449492 kubelet[2887]: I0120 02:28:35.446143 2887 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 20 02:28:35.449492 kubelet[2887]: I0120 02:28:35.446850 2887 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 20 02:28:35.522360 kubelet[2887]: I0120 02:28:35.518172 2887 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 20 02:28:35.522360 kubelet[2887]: I0120 02:28:35.518260 2887 server.go:1289] "Started kubelet"
Jan 20 02:28:35.522360 kubelet[2887]: I0120 02:28:35.520927 2887 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 02:28:35.522360 kubelet[2887]: I0120 02:28:35.521081 2887 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 02:28:35.555940 kubelet[2887]: I0120 02:28:35.551661 2887 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 02:28:35.555940 kubelet[2887]: I0120 02:28:35.554988 2887 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 02:28:35.555940 kubelet[2887]: I0120 02:28:35.555539 2887 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 02:28:35.586545 kubelet[2887]: I0120 02:28:35.571803 2887 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 20 02:28:35.586545 kubelet[2887]: I0120 02:28:35.572230 2887 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 20 02:28:35.586545 kubelet[2887]: I0120 02:28:35.574948 2887 factory.go:223] Registration of the systemd container factory successfully
Jan 20 02:28:35.586545 kubelet[2887]: I0120 02:28:35.575106 2887 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 02:28:35.586545 kubelet[2887]: I0120 02:28:35.578602 2887 reconciler.go:26] "Reconciler: start to sync state"
Jan 20 02:28:35.622486 kubelet[2887]: I0120 02:28:35.619968 2887 server.go:317] "Adding debug handlers to kubelet server"
Jan 20 02:28:35.622486 kubelet[2887]: I0120 02:28:35.622254 2887 factory.go:223] Registration of the containerd container factory successfully
Jan 20 02:28:35.652741 kubelet[2887]: E0120 02:28:35.639243 2887 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 02:28:35.907971 kubelet[2887]: I0120 02:28:35.897271 2887 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 20 02:28:35.980244 kubelet[2887]: I0120 02:28:35.980199 2887 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 20 02:28:35.980507 kubelet[2887]: I0120 02:28:35.980494 2887 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 20 02:28:35.980596 kubelet[2887]: I0120 02:28:35.980583 2887 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 02:28:35.980689 kubelet[2887]: I0120 02:28:35.980678 2887 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 20 02:28:35.980820 kubelet[2887]: E0120 02:28:35.980784 2887 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 02:28:36.083506 kubelet[2887]: E0120 02:28:36.081805 2887 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:28:36.150832 kubelet[2887]: I0120 02:28:36.147498 2887 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 02:28:36.150832 kubelet[2887]: I0120 02:28:36.147521 2887 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 02:28:36.150832 kubelet[2887]: I0120 02:28:36.147555 2887 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:28:36.150832 kubelet[2887]: I0120 02:28:36.147760 2887 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 20 02:28:36.150832 kubelet[2887]: I0120 02:28:36.147773 2887 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 20 02:28:36.150832 kubelet[2887]: I0120 02:28:36.147795 2887 policy_none.go:49] "None policy: Start"
Jan 20 02:28:36.150832 kubelet[2887]: I0120 02:28:36.147808 2887 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 20 02:28:36.150832 kubelet[2887]: I0120 02:28:36.147823 2887 state_mem.go:35] "Initializing new in-memory state store"
Jan 20 02:28:36.150832 kubelet[2887]: I0120 02:28:36.147935 2887 state_mem.go:75] "Updated machine memory state"
Jan 20 02:28:36.199406 kubelet[2887]: E0120 02:28:36.197776 2887 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 20 02:28:36.199406 kubelet[2887]: I0120 02:28:36.198023 2887 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 02:28:36.199406 kubelet[2887]: I0120 02:28:36.198035 2887 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 02:28:36.229853 kubelet[2887]: I0120 02:28:36.216972 2887 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 20 02:28:36.239426 kubelet[2887]: I0120 02:28:36.231636 2887 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 02:28:36.239600 containerd[1598]: time="2026-01-20T02:28:36.235088994Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 20 02:28:36.263732 kubelet[2887]: I0120 02:28:36.263632 2887 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 20 02:28:36.304764 kubelet[2887]: E0120 02:28:36.279294 2887 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 02:28:36.318961 kubelet[2887]: I0120 02:28:36.307886 2887 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 20 02:28:36.343304 kubelet[2887]: I0120 02:28:36.308099 2887 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 20 02:28:36.343304 kubelet[2887]: I0120 02:28:36.308275 2887 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:28:36.444786 kubelet[2887]: I0120 02:28:36.443696 2887 apiserver.go:52] "Watching apiserver"
Jan 20 02:28:36.444786 kubelet[2887]: I0120 02:28:36.444131 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:28:36.444786 kubelet[2887]: I0120 02:28:36.444172 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:28:36.444786 kubelet[2887]: I0120 02:28:36.444206 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6310083ba1b0d7f846373d05315aa17b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6310083ba1b0d7f846373d05315aa17b\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:28:36.444786 kubelet[2887]: I0120 02:28:36.444242 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6310083ba1b0d7f846373d05315aa17b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6310083ba1b0d7f846373d05315aa17b\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:28:36.444786 kubelet[2887]: I0120 02:28:36.444273 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:28:36.445139 kubelet[2887]: I0120 02:28:36.444300 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:28:36.445139 kubelet[2887]: I0120 02:28:36.444363 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:28:36.445139 kubelet[2887]: I0120 02:28:36.444396 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost"
Jan 20 02:28:36.445139 kubelet[2887]: I0120 02:28:36.444421 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6310083ba1b0d7f846373d05315aa17b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6310083ba1b0d7f846373d05315aa17b\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:28:36.455783 kubelet[2887]: I0120 02:28:36.449620 2887 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 02:28:36.469425 kubelet[2887]: E0120 02:28:36.465193 2887 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:28:36.541217 kubelet[2887]: E0120 02:28:36.541177 2887 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 20 02:28:36.541643 kubelet[2887]: E0120 02:28:36.541608 2887 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 20 02:28:36.580371 kubelet[2887]: I0120 02:28:36.580292 2887 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 20 02:28:36.626607 kubelet[2887]: I0120 02:28:36.623833 2887 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 20 02:28:36.626946 kubelet[2887]: I0120 02:28:36.626919 2887 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 20 02:28:36.653096 systemd[1]: Created slice kubepods-besteffort-podce400ebc_85dc_4d2c_b9f5_b81c4574ebf0.slice - libcontainer container kubepods-besteffort-podce400ebc_85dc_4d2c_b9f5_b81c4574ebf0.slice.
Jan 20 02:28:36.662869 kubelet[2887]: I0120 02:28:36.661852 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce400ebc-85dc-4d2c-b9f5-b81c4574ebf0-xtables-lock\") pod \"kube-proxy-vh8mb\" (UID: \"ce400ebc-85dc-4d2c-b9f5-b81c4574ebf0\") " pod="kube-system/kube-proxy-vh8mb"
Jan 20 02:28:36.662869 kubelet[2887]: I0120 02:28:36.661904 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce400ebc-85dc-4d2c-b9f5-b81c4574ebf0-lib-modules\") pod \"kube-proxy-vh8mb\" (UID: \"ce400ebc-85dc-4d2c-b9f5-b81c4574ebf0\") " pod="kube-system/kube-proxy-vh8mb"
Jan 20 02:28:36.662869 kubelet[2887]: I0120 02:28:36.661939 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qfm\" (UniqueName: \"kubernetes.io/projected/ce400ebc-85dc-4d2c-b9f5-b81c4574ebf0-kube-api-access-b7qfm\") pod \"kube-proxy-vh8mb\" (UID: \"ce400ebc-85dc-4d2c-b9f5-b81c4574ebf0\") " pod="kube-system/kube-proxy-vh8mb"
Jan 20 02:28:36.662869 kubelet[2887]: I0120 02:28:36.661979 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce400ebc-85dc-4d2c-b9f5-b81c4574ebf0-kube-proxy\") pod \"kube-proxy-vh8mb\" (UID: \"ce400ebc-85dc-4d2c-b9f5-b81c4574ebf0\") " pod="kube-system/kube-proxy-vh8mb"
Jan 20 02:28:37.012934 containerd[1598]: time="2026-01-20T02:28:37.010258823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vh8mb,Uid:ce400ebc-85dc-4d2c-b9f5-b81c4574ebf0,Namespace:kube-system,Attempt:0,}"
Jan 20 02:28:37.177980 containerd[1598]: time="2026-01-20T02:28:37.176693431Z" level=info msg="connecting to shim e524f65fc229660fb4f0aa48c695621c07d8ca988064e324fd4fe0e3965eb522" address="unix:///run/containerd/s/d12c172c85fc0d76bb37b022174e27d8c362cb4a1253fd071c939016471c0eb3" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:28:37.217306 update_engine[1585]: I20260120 02:28:37.217231 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 20 02:28:37.218556 update_engine[1585]: I20260120 02:28:37.218040 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 20 02:28:37.225543 update_engine[1585]: I20260120 02:28:37.225382 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 02:28:37.243769 update_engine[1585]: E20260120 02:28:37.243582 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 20 02:28:37.243769 update_engine[1585]: I20260120 02:28:37.243724 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 20 02:28:37.387130 systemd[1]: Started cri-containerd-e524f65fc229660fb4f0aa48c695621c07d8ca988064e324fd4fe0e3965eb522.scope - libcontainer container e524f65fc229660fb4f0aa48c695621c07d8ca988064e324fd4fe0e3965eb522.
Jan 20 02:28:37.779195 containerd[1598]: time="2026-01-20T02:28:37.772171170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vh8mb,Uid:ce400ebc-85dc-4d2c-b9f5-b81c4574ebf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e524f65fc229660fb4f0aa48c695621c07d8ca988064e324fd4fe0e3965eb522\""
Jan 20 02:28:37.806483 containerd[1598]: time="2026-01-20T02:28:37.801911201Z" level=info msg="CreateContainer within sandbox \"e524f65fc229660fb4f0aa48c695621c07d8ca988064e324fd4fe0e3965eb522\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 20 02:28:37.874293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3967504483.mount: Deactivated successfully.
Jan 20 02:28:37.886067 containerd[1598]: time="2026-01-20T02:28:37.884160851Z" level=info msg="Container 682547be7471373135b7a28e4816ac931112c2def6ce803ea2b640bbec12a147: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:28:37.954170 containerd[1598]: time="2026-01-20T02:28:37.954034074Z" level=info msg="CreateContainer within sandbox \"e524f65fc229660fb4f0aa48c695621c07d8ca988064e324fd4fe0e3965eb522\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"682547be7471373135b7a28e4816ac931112c2def6ce803ea2b640bbec12a147\""
Jan 20 02:28:37.965516 containerd[1598]: time="2026-01-20T02:28:37.965228333Z" level=info msg="StartContainer for \"682547be7471373135b7a28e4816ac931112c2def6ce803ea2b640bbec12a147\""
Jan 20 02:28:37.975824 containerd[1598]: time="2026-01-20T02:28:37.974846384Z" level=info msg="connecting to shim 682547be7471373135b7a28e4816ac931112c2def6ce803ea2b640bbec12a147" address="unix:///run/containerd/s/d12c172c85fc0d76bb37b022174e27d8c362cb4a1253fd071c939016471c0eb3" protocol=ttrpc version=3
Jan 20 02:28:38.128241 systemd[1]: Started cri-containerd-682547be7471373135b7a28e4816ac931112c2def6ce803ea2b640bbec12a147.scope - libcontainer container 682547be7471373135b7a28e4816ac931112c2def6ce803ea2b640bbec12a147.
Jan 20 02:28:38.681061 containerd[1598]: time="2026-01-20T02:28:38.672112094Z" level=info msg="StartContainer for \"682547be7471373135b7a28e4816ac931112c2def6ce803ea2b640bbec12a147\" returns successfully"
Jan 20 02:28:41.394535 kubelet[2887]: I0120 02:28:41.380075 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vh8mb" podStartSLOduration=6.380050397 podStartE2EDuration="6.380050397s" podCreationTimestamp="2026-01-20 02:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:28:39.419146581 +0000 UTC m=+4.645552047" watchObservedRunningTime="2026-01-20 02:28:41.380050397 +0000 UTC m=+6.606455833"
Jan 20 02:28:41.514049 systemd[1]: Created slice kubepods-burstable-pod542ae9ef_c83a_47d3_8759_eff39cf7b0f2.slice - libcontainer container kubepods-burstable-pod542ae9ef_c83a_47d3_8759_eff39cf7b0f2.slice.
Jan 20 02:28:41.533159 kubelet[2887]: I0120 02:28:41.528773 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/542ae9ef-c83a-47d3-8759-eff39cf7b0f2-xtables-lock\") pod \"kube-flannel-ds-lz5k7\" (UID: \"542ae9ef-c83a-47d3-8759-eff39cf7b0f2\") " pod="kube-flannel/kube-flannel-ds-lz5k7"
Jan 20 02:28:41.533159 kubelet[2887]: I0120 02:28:41.528838 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb2wg\" (UniqueName: \"kubernetes.io/projected/542ae9ef-c83a-47d3-8759-eff39cf7b0f2-kube-api-access-rb2wg\") pod \"kube-flannel-ds-lz5k7\" (UID: \"542ae9ef-c83a-47d3-8759-eff39cf7b0f2\") " pod="kube-flannel/kube-flannel-ds-lz5k7"
Jan 20 02:28:41.533159 kubelet[2887]: I0120 02:28:41.528871 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/542ae9ef-c83a-47d3-8759-eff39cf7b0f2-run\") pod \"kube-flannel-ds-lz5k7\" (UID: \"542ae9ef-c83a-47d3-8759-eff39cf7b0f2\") " pod="kube-flannel/kube-flannel-ds-lz5k7"
Jan 20 02:28:41.533159 kubelet[2887]: I0120 02:28:41.528900 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/542ae9ef-c83a-47d3-8759-eff39cf7b0f2-cni\") pod \"kube-flannel-ds-lz5k7\" (UID: \"542ae9ef-c83a-47d3-8759-eff39cf7b0f2\") " pod="kube-flannel/kube-flannel-ds-lz5k7"
Jan 20 02:28:41.533159 kubelet[2887]: I0120 02:28:41.528943 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/542ae9ef-c83a-47d3-8759-eff39cf7b0f2-cni-plugin\") pod \"kube-flannel-ds-lz5k7\" (UID: \"542ae9ef-c83a-47d3-8759-eff39cf7b0f2\") " pod="kube-flannel/kube-flannel-ds-lz5k7"
Jan 20 02:28:41.554601 kubelet[2887]: I0120 02:28:41.528969 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/542ae9ef-c83a-47d3-8759-eff39cf7b0f2-flannel-cfg\") pod \"kube-flannel-ds-lz5k7\" (UID: \"542ae9ef-c83a-47d3-8759-eff39cf7b0f2\") " pod="kube-flannel/kube-flannel-ds-lz5k7"
Jan 20 02:28:41.839723 containerd[1598]: time="2026-01-20T02:28:41.839664247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-lz5k7,Uid:542ae9ef-c83a-47d3-8759-eff39cf7b0f2,Namespace:kube-flannel,Attempt:0,}"
Jan 20 02:28:42.196565 sudo[1764]: pam_unix(sudo:session): session closed for user root
Jan 20 02:28:42.213710 containerd[1598]: time="2026-01-20T02:28:42.212266530Z" level=info msg="connecting to shim 8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63" address="unix:///run/containerd/s/c600f5d5f1c3b71d57d17c8dbbb1cb19f8080de1a3605c9806cd52f91261b708" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:28:42.228509 sshd[1763]: Connection closed by 10.0.0.1 port 37192
Jan 20 02:28:42.225856 sshd-session[1760]: pam_unix(sshd:session): session closed for user core
Jan 20 02:28:42.299984 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:37192.service: Deactivated successfully.
Jan 20 02:28:42.323305 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 02:28:42.334386 systemd[1]: session-7.scope: Consumed 14.304s CPU time, 226.9M memory peak.
Jan 20 02:28:42.350647 systemd-logind[1583]: Session 7 logged out. Waiting for processes to exit.
Jan 20 02:28:42.365498 systemd-logind[1583]: Removed session 7.
Jan 20 02:28:42.595820 systemd[1]: Started cri-containerd-8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63.scope - libcontainer container 8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63.
Jan 20 02:28:42.885128 containerd[1598]: time="2026-01-20T02:28:42.882922044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-lz5k7,Uid:542ae9ef-c83a-47d3-8759-eff39cf7b0f2,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63\""
Jan 20 02:28:42.907576 containerd[1598]: time="2026-01-20T02:28:42.904737701Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Jan 20 02:28:45.735561 kubelet[2887]: E0120 02:28:45.728356 2887 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.62s"
Jan 20 02:28:47.295954 update_engine[1585]: I20260120 02:28:47.295403 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 20 02:28:47.301814 update_engine[1585]: I20260120 02:28:47.301756 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 20 02:28:47.316148 update_engine[1585]: I20260120 02:28:47.302428 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 02:28:47.334533 update_engine[1585]: E20260120 02:28:47.334211 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 20 02:28:47.334533 update_engine[1585]: I20260120 02:28:47.334386 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 20 02:28:48.998830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464250455.mount: Deactivated successfully.
Jan 20 02:28:49.656815 containerd[1598]: time="2026-01-20T02:28:49.656615153Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008"
Jan 20 02:28:49.671144 containerd[1598]: time="2026-01-20T02:28:49.671067893Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:28:49.692659 containerd[1598]: time="2026-01-20T02:28:49.692409064Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:28:49.723898 containerd[1598]: time="2026-01-20T02:28:49.723086702Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:28:49.736494 containerd[1598]: time="2026-01-20T02:28:49.729644421Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 6.824851899s"
Jan 20 02:28:49.736494 containerd[1598]: time="2026-01-20T02:28:49.733674567Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Jan 20 02:28:49.779619 containerd[1598]: time="2026-01-20T02:28:49.779566525Z" level=info msg="CreateContainer within sandbox \"8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 20 02:28:50.000960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3329487772.mount: Deactivated successfully.
Jan 20 02:28:50.086259 containerd[1598]: time="2026-01-20T02:28:50.079833946Z" level=info msg="Container c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:28:50.136308 containerd[1598]: time="2026-01-20T02:28:50.136028578Z" level=info msg="CreateContainer within sandbox \"8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135\""
Jan 20 02:28:50.140714 containerd[1598]: time="2026-01-20T02:28:50.137392552Z" level=info msg="StartContainer for \"c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135\""
Jan 20 02:28:50.167899 containerd[1598]: time="2026-01-20T02:28:50.164858111Z" level=info msg="connecting to shim c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135" address="unix:///run/containerd/s/c600f5d5f1c3b71d57d17c8dbbb1cb19f8080de1a3605c9806cd52f91261b708" protocol=ttrpc version=3
Jan 20 02:28:50.334951 systemd[1]: Started cri-containerd-c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135.scope - libcontainer container c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135.
Jan 20 02:28:51.063724 systemd[1]: cri-containerd-c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135.scope: Deactivated successfully.
Jan 20 02:28:51.130120 containerd[1598]: time="2026-01-20T02:28:51.125172553Z" level=info msg="received container exit event container_id:\"c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135\" id:\"c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135\" pid:3240 exited_at:{seconds:1768876131 nanos:69867949}" Jan 20 02:28:51.130120 containerd[1598]: time="2026-01-20T02:28:51.128179952Z" level=info msg="StartContainer for \"c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135\" returns successfully" Jan 20 02:28:51.435251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135-rootfs.mount: Deactivated successfully. Jan 20 02:28:52.517872 containerd[1598]: time="2026-01-20T02:28:52.515306232Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 20 02:28:57.223194 update_engine[1585]: I20260120 02:28:57.223102 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.224077 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.224694 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:28:57.263341 update_engine[1585]: E20260120 02:28:57.255662 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.255786 1585 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.255801 1585 omaha_request_action.cc:617] Omaha request response: Jan 20 02:28:57.263341 update_engine[1585]: E20260120 02:28:57.255945 1585 omaha_request_action.cc:636] Omaha request network transfer failed. 
Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.256124 1585 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.256132 1585 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.256141 1585 update_attempter.cc:306] Processing Done. Jan 20 02:28:57.263341 update_engine[1585]: E20260120 02:28:57.256160 1585 update_attempter.cc:619] Update failed. Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.256168 1585 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.256177 1585 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.256186 1585 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.256593 1585 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.256886 1585 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:28:57.263341 update_engine[1585]: I20260120 02:28:57.256903 1585 omaha_request_action.cc:272] Request: Jan 20 02:28:57.263341 update_engine[1585]: Jan 20 02:28:57.263341 update_engine[1585]: Jan 20 02:28:57.264098 locksmithd[1641]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 02:28:57.264728 update_engine[1585]: Jan 20 02:28:57.264728 update_engine[1585]: Jan 20 02:28:57.264728 update_engine[1585]: Jan 20 02:28:57.264728 update_engine[1585]: Jan 20 02:28:57.264728 update_engine[1585]: I20260120 02:28:57.256999 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:28:57.264728 update_engine[1585]: I20260120 02:28:57.257032 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:28:57.264728 update_engine[1585]: I20260120 02:28:57.258352 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:28:57.295240 update_engine[1585]: E20260120 02:28:57.291690 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 02:28:57.295240 update_engine[1585]: I20260120 02:28:57.291852 1585 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:28:57.295240 update_engine[1585]: I20260120 02:28:57.291872 1585 omaha_request_action.cc:617] Omaha request response: Jan 20 02:28:57.295240 update_engine[1585]: I20260120 02:28:57.291883 1585 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:28:57.295240 update_engine[1585]: I20260120 02:28:57.291891 1585 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:28:57.295240 update_engine[1585]: I20260120 02:28:57.291899 1585 update_attempter.cc:306] Processing Done. Jan 20 02:28:57.295240 update_engine[1585]: I20260120 02:28:57.291914 1585 update_attempter.cc:310] Error event sent. 
Jan 20 02:28:57.295240 update_engine[1585]: I20260120 02:28:57.291930 1585 update_check_scheduler.cc:74] Next update check in 46m50s Jan 20 02:28:57.295769 locksmithd[1641]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 02:29:05.788028 containerd[1598]: time="2026-01-20T02:29:05.787935076Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:29:05.798920 containerd[1598]: time="2026-01-20T02:29:05.798860041Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Jan 20 02:29:05.872541 containerd[1598]: time="2026-01-20T02:29:05.870962054Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:29:05.923808 containerd[1598]: time="2026-01-20T02:29:05.923401643Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:29:05.930945 containerd[1598]: time="2026-01-20T02:29:05.928321937Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 13.412955953s" Jan 20 02:29:05.930945 containerd[1598]: time="2026-01-20T02:29:05.928367592Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 20 02:29:06.013049 containerd[1598]: time="2026-01-20T02:29:06.012997491Z" level=info 
msg="CreateContainer within sandbox \"8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 02:29:06.198237 containerd[1598]: time="2026-01-20T02:29:06.188056882Z" level=info msg="Container 844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:29:06.309273 containerd[1598]: time="2026-01-20T02:29:06.306374251Z" level=info msg="CreateContainer within sandbox \"8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60\"" Jan 20 02:29:06.331359 containerd[1598]: time="2026-01-20T02:29:06.323793130Z" level=info msg="StartContainer for \"844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60\"" Jan 20 02:29:06.340105 containerd[1598]: time="2026-01-20T02:29:06.339614500Z" level=info msg="connecting to shim 844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60" address="unix:///run/containerd/s/c600f5d5f1c3b71d57d17c8dbbb1cb19f8080de1a3605c9806cd52f91261b708" protocol=ttrpc version=3 Jan 20 02:29:06.704725 systemd[1]: Started cri-containerd-844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60.scope - libcontainer container 844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60. Jan 20 02:29:07.340519 systemd[1]: cri-containerd-844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60.scope: Deactivated successfully. 
Jan 20 02:29:07.446931 kubelet[2887]: I0120 02:29:07.430589 2887 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 02:29:07.473742 containerd[1598]: time="2026-01-20T02:29:07.473547356Z" level=info msg="received container exit event container_id:\"844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60\" id:\"844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60\" pid:3314 exited_at:{seconds:1768876147 nanos:369599568}" Jan 20 02:29:07.519196 containerd[1598]: time="2026-01-20T02:29:07.495551273Z" level=info msg="StartContainer for \"844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60\" returns successfully" Jan 20 02:29:08.040393 kubelet[2887]: I0120 02:29:08.032910 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nszb\" (UniqueName: \"kubernetes.io/projected/31b5e3f9-79b4-4822-beb4-a5817ee05e11-kube-api-access-8nszb\") pod \"coredns-674b8bbfcf-pfsrh\" (UID: \"31b5e3f9-79b4-4822-beb4-a5817ee05e11\") " pod="kube-system/coredns-674b8bbfcf-pfsrh" Jan 20 02:29:08.040393 kubelet[2887]: I0120 02:29:08.039255 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31b5e3f9-79b4-4822-beb4-a5817ee05e11-config-volume\") pod \"coredns-674b8bbfcf-pfsrh\" (UID: \"31b5e3f9-79b4-4822-beb4-a5817ee05e11\") " pod="kube-system/coredns-674b8bbfcf-pfsrh" Jan 20 02:29:08.268528 kubelet[2887]: I0120 02:29:08.262516 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkbpw\" (UniqueName: \"kubernetes.io/projected/a760862c-884d-46cc-a67c-c8b09a2778e6-kube-api-access-bkbpw\") pod \"coredns-674b8bbfcf-99gdz\" (UID: \"a760862c-884d-46cc-a67c-c8b09a2778e6\") " pod="kube-system/coredns-674b8bbfcf-99gdz" Jan 20 02:29:08.268528 kubelet[2887]: I0120 02:29:08.262700 2887 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a760862c-884d-46cc-a67c-c8b09a2778e6-config-volume\") pod \"coredns-674b8bbfcf-99gdz\" (UID: \"a760862c-884d-46cc-a67c-c8b09a2778e6\") " pod="kube-system/coredns-674b8bbfcf-99gdz" Jan 20 02:29:08.435272 systemd[1]: Created slice kubepods-burstable-pod31b5e3f9_79b4_4822_beb4_a5817ee05e11.slice - libcontainer container kubepods-burstable-pod31b5e3f9_79b4_4822_beb4_a5817ee05e11.slice. Jan 20 02:29:08.776672 containerd[1598]: time="2026-01-20T02:29:08.750062180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pfsrh,Uid:31b5e3f9-79b4-4822-beb4-a5817ee05e11,Namespace:kube-system,Attempt:0,}" Jan 20 02:29:08.993664 systemd[1]: Created slice kubepods-burstable-poda760862c_884d_46cc_a67c_c8b09a2778e6.slice - libcontainer container kubepods-burstable-poda760862c_884d_46cc_a67c_c8b09a2778e6.slice. Jan 20 02:29:09.067726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60-rootfs.mount: Deactivated successfully. Jan 20 02:29:09.129934 containerd[1598]: time="2026-01-20T02:29:09.129213135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99gdz,Uid:a760862c-884d-46cc-a67c-c8b09a2778e6,Namespace:kube-system,Attempt:0,}" Jan 20 02:29:09.547322 systemd[1]: run-netns-cni\x2db30fa745\x2d6c74\x2d4c9b\x2d9c84\x2dc9e912fefc62.mount: Deactivated successfully. Jan 20 02:29:09.642045 systemd[1]: run-netns-cni\x2dbed455f2\x2d513f\x2ddc57\x2da35d\x2dce409f20fdfc.mount: Deactivated successfully. 
Jan 20 02:29:09.695318 containerd[1598]: time="2026-01-20T02:29:09.695244547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pfsrh,Uid:31b5e3f9-79b4-4822-beb4-a5817ee05e11,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29dbde5b92d9eeb85187118fa6f3befd92a1f8978669c4328ec312a66bfe9180\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:29:09.702272 kubelet[2887]: E0120 02:29:09.696695 2887 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29dbde5b92d9eeb85187118fa6f3befd92a1f8978669c4328ec312a66bfe9180\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:29:09.702272 kubelet[2887]: E0120 02:29:09.696844 2887 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29dbde5b92d9eeb85187118fa6f3befd92a1f8978669c4328ec312a66bfe9180\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-pfsrh" Jan 20 02:29:09.702272 kubelet[2887]: E0120 02:29:09.696907 2887 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29dbde5b92d9eeb85187118fa6f3befd92a1f8978669c4328ec312a66bfe9180\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-pfsrh" Jan 20 02:29:09.702272 kubelet[2887]: E0120 02:29:09.696973 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-pfsrh_kube-system(31b5e3f9-79b4-4822-beb4-a5817ee05e11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pfsrh_kube-system(31b5e3f9-79b4-4822-beb4-a5817ee05e11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29dbde5b92d9eeb85187118fa6f3befd92a1f8978669c4328ec312a66bfe9180\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-pfsrh" podUID="31b5e3f9-79b4-4822-beb4-a5817ee05e11" Jan 20 02:29:09.714852 containerd[1598]: time="2026-01-20T02:29:09.714659003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99gdz,Uid:a760862c-884d-46cc-a67c-c8b09a2778e6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df257e2e277afa9988a001289a48f123103986739185e2f2cfed1e6d48a8e0da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:29:09.715859 kubelet[2887]: E0120 02:29:09.715161 2887 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df257e2e277afa9988a001289a48f123103986739185e2f2cfed1e6d48a8e0da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 20 02:29:09.715859 kubelet[2887]: E0120 02:29:09.715247 2887 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df257e2e277afa9988a001289a48f123103986739185e2f2cfed1e6d48a8e0da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-99gdz" Jan 20 02:29:09.715859 kubelet[2887]: E0120 02:29:09.715281 
2887 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df257e2e277afa9988a001289a48f123103986739185e2f2cfed1e6d48a8e0da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-99gdz" Jan 20 02:29:09.715859 kubelet[2887]: E0120 02:29:09.715350 2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-99gdz_kube-system(a760862c-884d-46cc-a67c-c8b09a2778e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-99gdz_kube-system(a760862c-884d-46cc-a67c-c8b09a2778e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df257e2e277afa9988a001289a48f123103986739185e2f2cfed1e6d48a8e0da\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-99gdz" podUID="a760862c-884d-46cc-a67c-c8b09a2778e6" Jan 20 02:29:09.959342 containerd[1598]: time="2026-01-20T02:29:09.957352377Z" level=info msg="CreateContainer within sandbox \"8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 02:29:10.182498 containerd[1598]: time="2026-01-20T02:29:10.182015876Z" level=info msg="Container b557a2fb598c375d1bd6889bc25a7f058da01c7049886a3498c25a349169539e: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:29:10.258092 containerd[1598]: time="2026-01-20T02:29:10.257707293Z" level=info msg="CreateContainer within sandbox \"8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"b557a2fb598c375d1bd6889bc25a7f058da01c7049886a3498c25a349169539e\"" Jan 20 02:29:10.266131 containerd[1598]: 
time="2026-01-20T02:29:10.266085925Z" level=info msg="StartContainer for \"b557a2fb598c375d1bd6889bc25a7f058da01c7049886a3498c25a349169539e\"" Jan 20 02:29:10.287133 containerd[1598]: time="2026-01-20T02:29:10.287056193Z" level=info msg="connecting to shim b557a2fb598c375d1bd6889bc25a7f058da01c7049886a3498c25a349169539e" address="unix:///run/containerd/s/c600f5d5f1c3b71d57d17c8dbbb1cb19f8080de1a3605c9806cd52f91261b708" protocol=ttrpc version=3 Jan 20 02:29:10.516116 systemd[1]: Started cri-containerd-b557a2fb598c375d1bd6889bc25a7f058da01c7049886a3498c25a349169539e.scope - libcontainer container b557a2fb598c375d1bd6889bc25a7f058da01c7049886a3498c25a349169539e. Jan 20 02:29:10.829115 containerd[1598]: time="2026-01-20T02:29:10.827366273Z" level=info msg="StartContainer for \"b557a2fb598c375d1bd6889bc25a7f058da01c7049886a3498c25a349169539e\" returns successfully" Jan 20 02:29:11.150032 kubelet[2887]: I0120 02:29:11.149706 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-lz5k7" podStartSLOduration=7.0890129139999996 podStartE2EDuration="30.149684595s" podCreationTimestamp="2026-01-20 02:28:41 +0000 UTC" firstStartedPulling="2026-01-20 02:28:42.902817639 +0000 UTC m=+8.129223076" lastFinishedPulling="2026-01-20 02:29:05.96348931 +0000 UTC m=+31.189894757" observedRunningTime="2026-01-20 02:29:11.137699599 +0000 UTC m=+36.364105037" watchObservedRunningTime="2026-01-20 02:29:11.149684595 +0000 UTC m=+36.376090032" Jan 20 02:29:12.784771 systemd-networkd[1486]: flannel.1: Link UP Jan 20 02:29:12.784786 systemd-networkd[1486]: flannel.1: Gained carrier Jan 20 02:29:14.209764 systemd-networkd[1486]: flannel.1: Gained IPv6LL Jan 20 02:29:20.989501 containerd[1598]: time="2026-01-20T02:29:20.988524915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pfsrh,Uid:31b5e3f9-79b4-4822-beb4-a5817ee05e11,Namespace:kube-system,Attempt:0,}" Jan 20 02:29:21.204131 systemd-networkd[1486]: cni0: Link 
UP Jan 20 02:29:21.204146 systemd-networkd[1486]: cni0: Gained carrier Jan 20 02:29:21.239766 systemd-networkd[1486]: cni0: Lost carrier Jan 20 02:29:21.452682 systemd-networkd[1486]: veth461cef57: Link UP Jan 20 02:29:21.501095 kernel: cni0: port 1(veth461cef57) entered blocking state Jan 20 02:29:21.501240 kernel: cni0: port 1(veth461cef57) entered disabled state Jan 20 02:29:21.501282 kernel: veth461cef57: entered allmulticast mode Jan 20 02:29:21.513246 kernel: veth461cef57: entered promiscuous mode Jan 20 02:29:21.584789 kernel: cni0: port 1(veth461cef57) entered blocking state Jan 20 02:29:21.584976 kernel: cni0: port 1(veth461cef57) entered forwarding state Jan 20 02:29:21.589225 systemd-networkd[1486]: veth461cef57: Gained carrier Jan 20 02:29:21.590784 systemd-networkd[1486]: cni0: Gained carrier Jan 20 02:29:21.624057 containerd[1598]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00008c950), "name":"cbr0", "type":"bridge"} Jan 20 02:29:21.624057 containerd[1598]: delegateAdd: netconf sent to delegate plugin: Jan 20 02:29:21.885079 containerd[1598]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T02:29:21.884322441Z" level=info msg="connecting to shim 9f57e188dad4a41865afc03dc28b6ef044e982f47bad4979fe530b6716286d08" 
address="unix:///run/containerd/s/70f8cbf313db792af459850af12be45cd337632ffb74d27327324ff8c6bd9309" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:29:22.186291 systemd[1]: Started cri-containerd-9f57e188dad4a41865afc03dc28b6ef044e982f47bad4979fe530b6716286d08.scope - libcontainer container 9f57e188dad4a41865afc03dc28b6ef044e982f47bad4979fe530b6716286d08. Jan 20 02:29:22.317898 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:29:22.548265 containerd[1598]: time="2026-01-20T02:29:22.543619030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pfsrh,Uid:31b5e3f9-79b4-4822-beb4-a5817ee05e11,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f57e188dad4a41865afc03dc28b6ef044e982f47bad4979fe530b6716286d08\"" Jan 20 02:29:22.585343 containerd[1598]: time="2026-01-20T02:29:22.582088782Z" level=info msg="CreateContainer within sandbox \"9f57e188dad4a41865afc03dc28b6ef044e982f47bad4979fe530b6716286d08\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:29:22.715679 containerd[1598]: time="2026-01-20T02:29:22.715233695Z" level=info msg="Container 3ac38f47584f56ec5519fb1a5a32ff3b55d09ee60fac74bfd00a093eecc651fc: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:29:22.727182 systemd-networkd[1486]: cni0: Gained IPv6LL Jan 20 02:29:22.767091 containerd[1598]: time="2026-01-20T02:29:22.767003895Z" level=info msg="CreateContainer within sandbox \"9f57e188dad4a41865afc03dc28b6ef044e982f47bad4979fe530b6716286d08\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ac38f47584f56ec5519fb1a5a32ff3b55d09ee60fac74bfd00a093eecc651fc\"" Jan 20 02:29:22.770797 containerd[1598]: time="2026-01-20T02:29:22.770759586Z" level=info msg="StartContainer for \"3ac38f47584f56ec5519fb1a5a32ff3b55d09ee60fac74bfd00a093eecc651fc\"" Jan 20 02:29:22.799033 containerd[1598]: time="2026-01-20T02:29:22.798836281Z" level=info msg="connecting to 
shim 3ac38f47584f56ec5519fb1a5a32ff3b55d09ee60fac74bfd00a093eecc651fc" address="unix:///run/containerd/s/70f8cbf313db792af459850af12be45cd337632ffb74d27327324ff8c6bd9309" protocol=ttrpc version=3 Jan 20 02:29:22.939637 systemd[1]: Started cri-containerd-3ac38f47584f56ec5519fb1a5a32ff3b55d09ee60fac74bfd00a093eecc651fc.scope - libcontainer container 3ac38f47584f56ec5519fb1a5a32ff3b55d09ee60fac74bfd00a093eecc651fc. Jan 20 02:29:22.994341 containerd[1598]: time="2026-01-20T02:29:22.993400202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99gdz,Uid:a760862c-884d-46cc-a67c-c8b09a2778e6,Namespace:kube-system,Attempt:0,}" Jan 20 02:29:23.177413 systemd-networkd[1486]: vethaf93e97c: Link UP Jan 20 02:29:23.198737 kernel: cni0: port 2(vethaf93e97c) entered blocking state Jan 20 02:29:23.200535 kernel: cni0: port 2(vethaf93e97c) entered disabled state Jan 20 02:29:23.209326 kernel: vethaf93e97c: entered allmulticast mode Jan 20 02:29:23.217534 kernel: vethaf93e97c: entered promiscuous mode Jan 20 02:29:23.277805 kernel: cni0: port 2(vethaf93e97c) entered blocking state Jan 20 02:29:23.277923 kernel: cni0: port 2(vethaf93e97c) entered forwarding state Jan 20 02:29:23.278119 systemd-networkd[1486]: vethaf93e97c: Gained carrier Jan 20 02:29:23.292948 containerd[1598]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Jan 20 02:29:23.292948 containerd[1598]: delegateAdd: netconf sent to delegate plugin: Jan 20 02:29:23.315163 containerd[1598]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T02:29:23.310793896Z" level=info msg="StartContainer for \"3ac38f47584f56ec5519fb1a5a32ff3b55d09ee60fac74bfd00a093eecc651fc\" returns successfully" Jan 20 02:29:23.693389 systemd-networkd[1486]: veth461cef57: Gained IPv6LL Jan 20 02:29:24.298508 containerd[1598]: time="2026-01-20T02:29:24.297018852Z" level=info msg="connecting to shim f1faa4e1f47a144ff7834747c329c01bfead79119ad16167eafedc90a9bc2241" address="unix:///run/containerd/s/c8a163417d84682315793c214f3aabfa3443d819ea0b4cddbf02ab0ef7fe62e5" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:29:24.505348 kubelet[2887]: I0120 02:29:24.503951 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pfsrh" podStartSLOduration=49.503927604 podStartE2EDuration="49.503927604s" podCreationTimestamp="2026-01-20 02:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:29:23.615648022 +0000 UTC m=+48.842053499" watchObservedRunningTime="2026-01-20 02:29:24.503927604 +0000 UTC m=+49.730333081" Jan 20 02:29:24.506271 systemd[1]: Started cri-containerd-f1faa4e1f47a144ff7834747c329c01bfead79119ad16167eafedc90a9bc2241.scope - libcontainer container f1faa4e1f47a144ff7834747c329c01bfead79119ad16167eafedc90a9bc2241. 
Jan 20 02:29:24.582566 systemd-networkd[1486]: vethaf93e97c: Gained IPv6LL Jan 20 02:29:24.725169 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:29:25.086902 containerd[1598]: time="2026-01-20T02:29:25.084589511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99gdz,Uid:a760862c-884d-46cc-a67c-c8b09a2778e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1faa4e1f47a144ff7834747c329c01bfead79119ad16167eafedc90a9bc2241\"" Jan 20 02:29:25.127233 containerd[1598]: time="2026-01-20T02:29:25.126202315Z" level=info msg="CreateContainer within sandbox \"f1faa4e1f47a144ff7834747c329c01bfead79119ad16167eafedc90a9bc2241\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:29:25.245064 containerd[1598]: time="2026-01-20T02:29:25.243065181Z" level=info msg="Container 645dc74e609dc292c46922eafd32a7d2bbf4889c719ab00e7d616081d479c0bf: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:29:25.313224 containerd[1598]: time="2026-01-20T02:29:25.313079589Z" level=info msg="CreateContainer within sandbox \"f1faa4e1f47a144ff7834747c329c01bfead79119ad16167eafedc90a9bc2241\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"645dc74e609dc292c46922eafd32a7d2bbf4889c719ab00e7d616081d479c0bf\"" Jan 20 02:29:25.358365 containerd[1598]: time="2026-01-20T02:29:25.340566814Z" level=info msg="StartContainer for \"645dc74e609dc292c46922eafd32a7d2bbf4889c719ab00e7d616081d479c0bf\"" Jan 20 02:29:25.358365 containerd[1598]: time="2026-01-20T02:29:25.355730223Z" level=info msg="connecting to shim 645dc74e609dc292c46922eafd32a7d2bbf4889c719ab00e7d616081d479c0bf" address="unix:///run/containerd/s/c8a163417d84682315793c214f3aabfa3443d819ea0b4cddbf02ab0ef7fe62e5" protocol=ttrpc version=3 Jan 20 02:29:25.687135 systemd[1]: Started cri-containerd-645dc74e609dc292c46922eafd32a7d2bbf4889c719ab00e7d616081d479c0bf.scope - libcontainer container 
645dc74e609dc292c46922eafd32a7d2bbf4889c719ab00e7d616081d479c0bf. Jan 20 02:29:26.337518 containerd[1598]: time="2026-01-20T02:29:26.337302780Z" level=info msg="StartContainer for \"645dc74e609dc292c46922eafd32a7d2bbf4889c719ab00e7d616081d479c0bf\" returns successfully" Jan 20 02:29:27.625483 kubelet[2887]: I0120 02:29:27.616423 2887 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-99gdz" podStartSLOduration=52.616400651 podStartE2EDuration="52.616400651s" podCreationTimestamp="2026-01-20 02:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:29:26.753862696 +0000 UTC m=+51.980268163" watchObservedRunningTime="2026-01-20 02:29:27.616400651 +0000 UTC m=+52.842806228" Jan 20 02:29:44.793837 kubelet[2887]: E0120 02:29:44.793754 2887 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.803s" Jan 20 02:29:53.254027 kubelet[2887]: E0120 02:29:53.242365 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:53.413536 kubelet[2887]: E0120 02:29:53.385953 2887 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.751s" Jan 20 02:29:56.989019 kubelet[2887]: E0120 02:29:56.988796 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:59.989953 kubelet[2887]: E0120 02:29:59.985266 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:13.993207 kubelet[2887]: E0120 02:30:13.993158 2887 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:19.018516 kubelet[2887]: E0120 02:30:18.982180 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:30.990996 kubelet[2887]: E0120 02:30:30.982516 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:58.990096 kubelet[2887]: E0120 02:30:58.989134 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:00.007981 kubelet[2887]: E0120 02:31:00.007801 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:13.001518 kubelet[2887]: E0120 02:31:12.994493 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:18.983810 kubelet[2887]: E0120 02:31:18.983386 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:22.990216 kubelet[2887]: E0120 02:31:22.986755 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:40.984420 kubelet[2887]: E0120 02:31:40.984104 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:49.998645 kubelet[2887]: E0120 02:31:49.986264 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:07.995210 kubelet[2887]: E0120 02:32:07.994661 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:20.997671 kubelet[2887]: E0120 02:32:20.991895 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:23.959242 systemd[1]: Started sshd@7-10.0.0.99:22-10.0.0.1:57234.service - OpenSSH per-connection server daemon (10.0.0.1:57234). Jan 20 02:32:24.698384 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 57234 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:32:24.728193 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:24.797819 systemd-logind[1583]: New session 8 of user core. Jan 20 02:32:24.828964 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 02:32:26.306753 sshd[4474]: Connection closed by 10.0.0.1 port 57234 Jan 20 02:32:26.310082 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:26.374926 systemd[1]: sshd@7-10.0.0.99:22-10.0.0.1:57234.service: Deactivated successfully. Jan 20 02:32:26.377845 systemd-logind[1583]: Session 8 logged out. Waiting for processes to exit. Jan 20 02:32:26.409334 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 02:32:26.437364 systemd-logind[1583]: Removed session 8. Jan 20 02:32:31.395196 systemd[1]: Started sshd@8-10.0.0.99:22-10.0.0.1:57154.service - OpenSSH per-connection server daemon (10.0.0.1:57154). 
Jan 20 02:32:31.974763 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 57154 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:32:31.984914 sshd-session[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:32.071789 systemd-logind[1583]: New session 9 of user core. Jan 20 02:32:32.106779 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 02:32:33.494525 sshd[4514]: Connection closed by 10.0.0.1 port 57154 Jan 20 02:32:33.501850 sshd-session[4511]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:33.512017 systemd[1]: sshd@8-10.0.0.99:22-10.0.0.1:57154.service: Deactivated successfully. Jan 20 02:32:33.513584 systemd-logind[1583]: Session 9 logged out. Waiting for processes to exit. Jan 20 02:32:33.517900 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 02:32:33.524268 systemd-logind[1583]: Removed session 9. Jan 20 02:32:36.994350 kubelet[2887]: E0120 02:32:36.982545 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:43.801899 systemd[1]: Started sshd@9-10.0.0.99:22-10.0.0.1:46866.service - OpenSSH per-connection server daemon (10.0.0.1:46866). Jan 20 02:32:44.303738 kubelet[2887]: E0120 02:32:44.297841 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:44.772882 sshd[4556]: Accepted publickey for core from 10.0.0.1 port 46866 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:32:44.872430 sshd-session[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:45.070090 systemd-logind[1583]: New session 10 of user core. Jan 20 02:32:45.121234 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 20 02:32:46.656356 sshd[4579]: Connection closed by 10.0.0.1 port 46866 Jan 20 02:32:46.647132 sshd-session[4556]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:46.697002 systemd[1]: sshd@9-10.0.0.99:22-10.0.0.1:46866.service: Deactivated successfully. Jan 20 02:32:46.712983 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 02:32:46.740160 systemd-logind[1583]: Session 10 logged out. Waiting for processes to exit. Jan 20 02:32:46.742194 systemd-logind[1583]: Removed session 10. Jan 20 02:32:49.992550 kubelet[2887]: E0120 02:32:49.984185 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:52.603001 systemd[1]: Started sshd@10-10.0.0.99:22-10.0.0.1:42508.service - OpenSSH per-connection server daemon (10.0.0.1:42508). Jan 20 02:32:53.048291 sshd[4615]: Accepted publickey for core from 10.0.0.1 port 42508 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:32:53.060122 sshd-session[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:53.126879 systemd-logind[1583]: New session 11 of user core. Jan 20 02:32:53.212386 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 02:32:53.219925 containerd[1598]: time="2026-01-20T02:32:53.219561176Z" level=warning msg="container event discarded" container=e65765ca8fc636e317b2a7ce39254d601adcaa1ad72538c8936f098ae40d3b97 type=CONTAINER_CREATED_EVENT Jan 20 02:32:53.239507 containerd[1598]: time="2026-01-20T02:32:53.238126424Z" level=warning msg="container event discarded" container=e65765ca8fc636e317b2a7ce39254d601adcaa1ad72538c8936f098ae40d3b97 type=CONTAINER_STARTED_EVENT Jan 20 02:32:53.636840 containerd[1598]: time="2026-01-20T02:32:53.627851295Z" level=warning msg="container event discarded" container=fde323a45d8139473ef07f634aa84a7af73267c93abaa521494a47fc041e01ef type=CONTAINER_CREATED_EVENT Jan 20 02:32:53.636840 containerd[1598]: time="2026-01-20T02:32:53.627922248Z" level=warning msg="container event discarded" container=fde323a45d8139473ef07f634aa84a7af73267c93abaa521494a47fc041e01ef type=CONTAINER_STARTED_EVENT Jan 20 02:32:53.738882 containerd[1598]: time="2026-01-20T02:32:53.727868072Z" level=warning msg="container event discarded" container=4db5d2e2a3611e58826ce2199e7562ebed71c30806983c6fcd368de7dd8471e5 type=CONTAINER_CREATED_EVENT Jan 20 02:32:53.738882 containerd[1598]: time="2026-01-20T02:32:53.727941840Z" level=warning msg="container event discarded" container=4db5d2e2a3611e58826ce2199e7562ebed71c30806983c6fcd368de7dd8471e5 type=CONTAINER_STARTED_EVENT Jan 20 02:32:53.836893 containerd[1598]: time="2026-01-20T02:32:53.824323131Z" level=warning msg="container event discarded" container=29ae5e27f92c218d13797c2aa357dad54c6049f4fedb72fbe4a971b15dd6af72 type=CONTAINER_CREATED_EVENT Jan 20 02:32:54.101243 containerd[1598]: time="2026-01-20T02:32:54.101144823Z" level=warning msg="container event discarded" container=244afda0cafd29b0723f9c2d0c810493eb91ac06cb6f4ad2c7a63e21bf658e1f type=CONTAINER_CREATED_EVENT Jan 20 02:32:54.120100 containerd[1598]: time="2026-01-20T02:32:54.119893722Z" level=warning msg="container event discarded" 
container=b79845d8f6f3106ee59e663014f88b2c9f6c79888b8e7d4210740dbc1d79a29d type=CONTAINER_CREATED_EVENT Jan 20 02:32:54.712948 sshd[4618]: Connection closed by 10.0.0.1 port 42508 Jan 20 02:32:54.714785 sshd-session[4615]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:54.790855 systemd[1]: sshd@10-10.0.0.99:22-10.0.0.1:42508.service: Deactivated successfully. Jan 20 02:32:54.849808 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 02:32:55.125993 systemd-logind[1583]: Session 11 logged out. Waiting for processes to exit. Jan 20 02:32:55.170836 systemd-logind[1583]: Removed session 11. Jan 20 02:32:55.190301 kubelet[2887]: E0120 02:32:55.175683 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:55.191224 kubelet[2887]: E0120 02:32:55.190552 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:55.642342 containerd[1598]: time="2026-01-20T02:32:55.633540603Z" level=warning msg="container event discarded" container=29ae5e27f92c218d13797c2aa357dad54c6049f4fedb72fbe4a971b15dd6af72 type=CONTAINER_STARTED_EVENT Jan 20 02:32:55.808822 containerd[1598]: time="2026-01-20T02:32:55.808725748Z" level=warning msg="container event discarded" container=244afda0cafd29b0723f9c2d0c810493eb91ac06cb6f4ad2c7a63e21bf658e1f type=CONTAINER_STARTED_EVENT Jan 20 02:32:56.472243 containerd[1598]: time="2026-01-20T02:32:56.470244659Z" level=warning msg="container event discarded" container=b79845d8f6f3106ee59e663014f88b2c9f6c79888b8e7d4210740dbc1d79a29d type=CONTAINER_STARTED_EVENT Jan 20 02:32:59.760665 systemd[1]: Started sshd@11-10.0.0.99:22-10.0.0.1:36478.service - OpenSSH per-connection server daemon (10.0.0.1:36478). 
Jan 20 02:33:00.300009 sshd[4653]: Accepted publickey for core from 10.0.0.1 port 36478 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:33:00.341786 sshd-session[4653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:00.431148 systemd-logind[1583]: New session 12 of user core. Jan 20 02:33:00.485848 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 02:33:01.817581 sshd[4662]: Connection closed by 10.0.0.1 port 36478 Jan 20 02:33:01.826930 sshd-session[4653]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:01.879012 systemd-logind[1583]: Session 12 logged out. Waiting for processes to exit. Jan 20 02:33:01.896225 systemd[1]: sshd@11-10.0.0.99:22-10.0.0.1:36478.service: Deactivated successfully. Jan 20 02:33:01.954784 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 02:33:01.963787 systemd-logind[1583]: Removed session 12. Jan 20 02:33:06.898240 systemd[1]: Started sshd@12-10.0.0.99:22-10.0.0.1:58366.service - OpenSSH per-connection server daemon (10.0.0.1:58366). Jan 20 02:33:07.236016 sshd[4712]: Accepted publickey for core from 10.0.0.1 port 58366 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:33:07.241280 sshd-session[4712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:07.300544 systemd-logind[1583]: New session 13 of user core. Jan 20 02:33:07.318755 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 02:33:08.262393 sshd[4715]: Connection closed by 10.0.0.1 port 58366 Jan 20 02:33:08.267766 sshd-session[4712]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:08.337246 systemd[1]: sshd@12-10.0.0.99:22-10.0.0.1:58366.service: Deactivated successfully. Jan 20 02:33:08.393942 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 02:33:08.449382 systemd-logind[1583]: Session 13 logged out. Waiting for processes to exit. 
Jan 20 02:33:08.486266 systemd-logind[1583]: Removed session 13. Jan 20 02:33:13.341535 systemd[1]: Started sshd@13-10.0.0.99:22-10.0.0.1:58430.service - OpenSSH per-connection server daemon (10.0.0.1:58430). Jan 20 02:33:13.783406 sshd[4751]: Accepted publickey for core from 10.0.0.1 port 58430 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:33:13.795749 sshd-session[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:13.892721 systemd-logind[1583]: New session 14 of user core. Jan 20 02:33:13.909944 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 02:33:14.873007 sshd[4754]: Connection closed by 10.0.0.1 port 58430 Jan 20 02:33:14.871058 sshd-session[4751]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:14.901628 systemd-logind[1583]: Session 14 logged out. Waiting for processes to exit. Jan 20 02:33:14.915203 systemd[1]: sshd@13-10.0.0.99:22-10.0.0.1:58430.service: Deactivated successfully. Jan 20 02:33:14.928562 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 02:33:14.940539 systemd-logind[1583]: Removed session 14. Jan 20 02:33:19.992217 systemd[1]: Started sshd@14-10.0.0.99:22-10.0.0.1:36034.service - OpenSSH per-connection server daemon (10.0.0.1:36034). Jan 20 02:33:20.674099 sshd[4789]: Accepted publickey for core from 10.0.0.1 port 36034 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:33:20.677590 sshd-session[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:20.745023 systemd-logind[1583]: New session 15 of user core. Jan 20 02:33:20.795719 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 20 02:33:21.785104 sshd[4798]: Connection closed by 10.0.0.1 port 36034 Jan 20 02:33:21.787269 sshd-session[4789]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:21.828671 systemd[1]: sshd@14-10.0.0.99:22-10.0.0.1:36034.service: Deactivated successfully. Jan 20 02:33:21.866385 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 02:33:21.918494 systemd-logind[1583]: Session 15 logged out. Waiting for processes to exit. Jan 20 02:33:21.983202 systemd-logind[1583]: Removed session 15. Jan 20 02:33:26.069647 kubelet[2887]: E0120 02:33:26.039825 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:26.866979 systemd[1]: Started sshd@15-10.0.0.99:22-10.0.0.1:58092.service - OpenSSH per-connection server daemon (10.0.0.1:58092). Jan 20 02:33:27.358257 sshd[4832]: Accepted publickey for core from 10.0.0.1 port 58092 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:33:27.381852 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:27.448647 systemd-logind[1583]: New session 16 of user core. Jan 20 02:33:27.484716 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 02:33:28.442736 sshd[4835]: Connection closed by 10.0.0.1 port 58092 Jan 20 02:33:28.444046 sshd-session[4832]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:28.468202 systemd[1]: sshd@15-10.0.0.99:22-10.0.0.1:58092.service: Deactivated successfully. Jan 20 02:33:28.488906 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 02:33:28.540872 systemd-logind[1583]: Session 16 logged out. Waiting for processes to exit. Jan 20 02:33:28.553098 systemd-logind[1583]: Removed session 16. Jan 20 02:33:33.503535 systemd[1]: Started sshd@16-10.0.0.99:22-10.0.0.1:58130.service - OpenSSH per-connection server daemon (10.0.0.1:58130). 
Jan 20 02:33:34.017943 sshd[4885]: Accepted publickey for core from 10.0.0.1 port 58130 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:33:34.023360 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:34.103650 systemd-logind[1583]: New session 17 of user core. Jan 20 02:33:34.115666 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 02:33:35.171500 sshd[4888]: Connection closed by 10.0.0.1 port 58130 Jan 20 02:33:35.173825 sshd-session[4885]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:35.260708 systemd[1]: sshd@16-10.0.0.99:22-10.0.0.1:58130.service: Deactivated successfully. Jan 20 02:33:35.290236 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 02:33:35.307559 systemd-logind[1583]: Session 17 logged out. Waiting for processes to exit. Jan 20 02:33:35.346934 systemd[1]: Started sshd@17-10.0.0.99:22-10.0.0.1:53582.service - OpenSSH per-connection server daemon (10.0.0.1:53582). Jan 20 02:33:35.352888 systemd-logind[1583]: Removed session 17. Jan 20 02:33:35.485839 sshd[4902]: Accepted publickey for core from 10.0.0.1 port 53582 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:33:35.492646 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:35.522260 systemd-logind[1583]: New session 18 of user core. Jan 20 02:33:35.532018 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 02:33:36.527040 sshd[4905]: Connection closed by 10.0.0.1 port 53582 Jan 20 02:33:36.527751 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:36.602904 systemd[1]: sshd@17-10.0.0.99:22-10.0.0.1:53582.service: Deactivated successfully. Jan 20 02:33:36.617820 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 02:33:36.625544 systemd-logind[1583]: Session 18 logged out. Waiting for processes to exit. 
Jan 20 02:33:36.650633 systemd-logind[1583]: Removed session 18. Jan 20 02:33:36.675648 systemd[1]: Started sshd@18-10.0.0.99:22-10.0.0.1:53590.service - OpenSSH per-connection server daemon (10.0.0.1:53590). Jan 20 02:33:37.039576 sshd[4925]: Accepted publickey for core from 10.0.0.1 port 53590 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:33:37.031718 sshd-session[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:37.090857 systemd-logind[1583]: New session 19 of user core. Jan 20 02:33:37.129264 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 02:33:37.783379 containerd[1598]: time="2026-01-20T02:33:37.783236064Z" level=warning msg="container event discarded" container=e524f65fc229660fb4f0aa48c695621c07d8ca988064e324fd4fe0e3965eb522 type=CONTAINER_CREATED_EVENT Jan 20 02:33:37.783379 containerd[1598]: time="2026-01-20T02:33:37.783336801Z" level=warning msg="container event discarded" container=e524f65fc229660fb4f0aa48c695621c07d8ca988064e324fd4fe0e3965eb522 type=CONTAINER_STARTED_EVENT Jan 20 02:33:37.961139 containerd[1598]: time="2026-01-20T02:33:37.960920856Z" level=warning msg="container event discarded" container=682547be7471373135b7a28e4816ac931112c2def6ce803ea2b640bbec12a147 type=CONTAINER_CREATED_EVENT Jan 20 02:33:38.221556 sshd[4928]: Connection closed by 10.0.0.1 port 53590 Jan 20 02:33:38.207853 sshd-session[4925]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:38.273585 systemd[1]: sshd@18-10.0.0.99:22-10.0.0.1:53590.service: Deactivated successfully. Jan 20 02:33:38.325183 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 02:33:38.405693 systemd-logind[1583]: Session 19 logged out. Waiting for processes to exit. Jan 20 02:33:38.424797 systemd-logind[1583]: Removed session 19. 
Jan 20 02:33:38.660624 containerd[1598]: time="2026-01-20T02:33:38.660381987Z" level=warning msg="container event discarded" container=682547be7471373135b7a28e4816ac931112c2def6ce803ea2b640bbec12a147 type=CONTAINER_STARTED_EVENT Jan 20 02:33:42.020098 kubelet[2887]: E0120 02:33:42.012810 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:42.894930 containerd[1598]: time="2026-01-20T02:33:42.894809177Z" level=warning msg="container event discarded" container=8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63 type=CONTAINER_CREATED_EVENT Jan 20 02:33:42.894930 containerd[1598]: time="2026-01-20T02:33:42.894887874Z" level=warning msg="container event discarded" container=8a7c7d7425ea892c8c998a3ce669ea7b2811e85c74a342e6e77aaf7467876f63 type=CONTAINER_STARTED_EVENT Jan 20 02:33:43.280666 systemd[1]: Started sshd@19-10.0.0.99:22-10.0.0.1:53602.service - OpenSSH per-connection server daemon (10.0.0.1:53602). Jan 20 02:33:43.636115 sshd[4964]: Accepted publickey for core from 10.0.0.1 port 53602 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:33:43.648334 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:43.735411 systemd-logind[1583]: New session 20 of user core. Jan 20 02:33:43.743421 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 02:33:44.790492 sshd[4968]: Connection closed by 10.0.0.1 port 53602 Jan 20 02:33:44.800860 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:44.840773 systemd[1]: sshd@19-10.0.0.99:22-10.0.0.1:53602.service: Deactivated successfully. Jan 20 02:33:44.874975 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 02:33:44.911382 systemd-logind[1583]: Session 20 logged out. Waiting for processes to exit. 
Jan 20 02:33:44.927893 systemd-logind[1583]: Removed session 20. Jan 20 02:33:49.842113 systemd[1]: Started sshd@20-10.0.0.99:22-10.0.0.1:37200.service - OpenSSH per-connection server daemon (10.0.0.1:37200). Jan 20 02:33:50.154618 containerd[1598]: time="2026-01-20T02:33:50.154361672Z" level=warning msg="container event discarded" container=c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135 type=CONTAINER_CREATED_EVENT Jan 20 02:33:50.158017 sshd[5014]: Accepted publickey for core from 10.0.0.1 port 37200 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:33:50.165488 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:50.228104 systemd-logind[1583]: New session 21 of user core. Jan 20 02:33:50.267752 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 02:33:51.131590 containerd[1598]: time="2026-01-20T02:33:51.131497154Z" level=warning msg="container event discarded" container=c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135 type=CONTAINER_STARTED_EVENT Jan 20 02:33:51.420549 sshd[5017]: Connection closed by 10.0.0.1 port 37200 Jan 20 02:33:51.419806 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:51.435813 systemd-logind[1583]: Session 21 logged out. Waiting for processes to exit. Jan 20 02:33:51.460642 systemd[1]: sshd@20-10.0.0.99:22-10.0.0.1:37200.service: Deactivated successfully. Jan 20 02:33:51.487040 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 02:33:51.529223 systemd-logind[1583]: Removed session 21. 
Jan 20 02:33:51.712285 containerd[1598]: time="2026-01-20T02:33:51.710892607Z" level=warning msg="container event discarded" container=c13121affa0148e48612c0e1a4155b695f311ea710a7642d9fbb0dd965f45135 type=CONTAINER_STOPPED_EVENT Jan 20 02:33:55.987503 kubelet[2887]: E0120 02:33:55.987277 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:55.992501 kubelet[2887]: E0120 02:33:55.992387 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:56.538882 systemd[1]: Started sshd@21-10.0.0.99:22-10.0.0.1:45842.service - OpenSSH per-connection server daemon (10.0.0.1:45842). Jan 20 02:33:57.122267 sshd[5056]: Accepted publickey for core from 10.0.0.1 port 45842 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:33:57.136866 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:57.194395 systemd-logind[1583]: New session 22 of user core. Jan 20 02:33:57.229514 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 02:33:57.915210 sshd[5060]: Connection closed by 10.0.0.1 port 45842 Jan 20 02:33:57.918805 sshd-session[5056]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:57.942953 systemd[1]: sshd@21-10.0.0.99:22-10.0.0.1:45842.service: Deactivated successfully. Jan 20 02:33:57.966815 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 02:33:58.002603 systemd-logind[1583]: Session 22 logged out. Waiting for processes to exit. Jan 20 02:33:58.019307 systemd-logind[1583]: Removed session 22. Jan 20 02:34:03.046899 systemd[1]: Started sshd@22-10.0.0.99:22-10.0.0.1:45872.service - OpenSSH per-connection server daemon (10.0.0.1:45872). 
Jan 20 02:34:03.478033 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 45872 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:34:03.491965 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:03.575393 systemd-logind[1583]: New session 23 of user core. Jan 20 02:34:03.602106 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 02:34:03.983529 kubelet[2887]: E0120 02:34:03.982998 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:34:04.598074 sshd[5097]: Connection closed by 10.0.0.1 port 45872 Jan 20 02:34:04.580898 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:04.603889 systemd[1]: sshd@22-10.0.0.99:22-10.0.0.1:45872.service: Deactivated successfully. Jan 20 02:34:04.629429 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 02:34:04.670260 systemd-logind[1583]: Session 23 logged out. Waiting for processes to exit. Jan 20 02:34:04.672926 systemd-logind[1583]: Removed session 23. Jan 20 02:34:06.317877 containerd[1598]: time="2026-01-20T02:34:06.317781500Z" level=warning msg="container event discarded" container=844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60 type=CONTAINER_CREATED_EVENT Jan 20 02:34:07.490402 containerd[1598]: time="2026-01-20T02:34:07.490123833Z" level=warning msg="container event discarded" container=844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60 type=CONTAINER_STARTED_EVENT Jan 20 02:34:09.645783 systemd[1]: Started sshd@23-10.0.0.99:22-10.0.0.1:51488.service - OpenSSH per-connection server daemon (10.0.0.1:51488). 
Jan 20 02:34:09.776678 containerd[1598]: time="2026-01-20T02:34:09.776538404Z" level=warning msg="container event discarded" container=844c22231aa4c54f7da2dcd58962a3cc138b67194aa1e5a4d63f928c6a504f60 type=CONTAINER_STOPPED_EVENT Jan 20 02:34:10.136947 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 51488 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:34:10.155996 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:10.241562 systemd-logind[1583]: New session 24 of user core. Jan 20 02:34:10.264805 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 02:34:10.274674 containerd[1598]: time="2026-01-20T02:34:10.274373394Z" level=warning msg="container event discarded" container=b557a2fb598c375d1bd6889bc25a7f058da01c7049886a3498c25a349169539e type=CONTAINER_CREATED_EVENT Jan 20 02:34:10.831057 containerd[1598]: time="2026-01-20T02:34:10.830931927Z" level=warning msg="container event discarded" container=b557a2fb598c375d1bd6889bc25a7f058da01c7049886a3498c25a349169539e type=CONTAINER_STARTED_EVENT Jan 20 02:34:11.385774 sshd[5135]: Connection closed by 10.0.0.1 port 51488 Jan 20 02:34:11.393622 sshd-session[5130]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:11.425911 systemd[1]: sshd@23-10.0.0.99:22-10.0.0.1:51488.service: Deactivated successfully. Jan 20 02:34:11.452002 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 02:34:11.485667 systemd-logind[1583]: Session 24 logged out. Waiting for processes to exit. Jan 20 02:34:11.513696 systemd-logind[1583]: Removed session 24. 
Jan 20 02:34:12.983954 kubelet[2887]: E0120 02:34:12.981737 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:34:12.992166 kubelet[2887]: E0120 02:34:12.991588 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:34:16.466043 systemd[1]: Started sshd@24-10.0.0.99:22-10.0.0.1:49366.service - OpenSSH per-connection server daemon (10.0.0.1:49366). Jan 20 02:34:16.839963 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 49366 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:34:16.845269 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:16.905026 systemd-logind[1583]: New session 25 of user core. Jan 20 02:34:16.927020 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 02:34:17.816394 sshd[5186]: Connection closed by 10.0.0.1 port 49366 Jan 20 02:34:17.815799 sshd-session[5183]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:17.839525 systemd-logind[1583]: Session 25 logged out. Waiting for processes to exit. Jan 20 02:34:17.842586 systemd[1]: sshd@24-10.0.0.99:22-10.0.0.1:49366.service: Deactivated successfully. Jan 20 02:34:17.880166 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 02:34:17.888323 systemd-logind[1583]: Removed session 25. 
Jan 20 02:34:22.560600 containerd[1598]: time="2026-01-20T02:34:22.560427598Z" level=warning msg="container event discarded" container=9f57e188dad4a41865afc03dc28b6ef044e982f47bad4979fe530b6716286d08 type=CONTAINER_CREATED_EVENT Jan 20 02:34:22.560600 containerd[1598]: time="2026-01-20T02:34:22.560559213Z" level=warning msg="container event discarded" container=9f57e188dad4a41865afc03dc28b6ef044e982f47bad4979fe530b6716286d08 type=CONTAINER_STARTED_EVENT Jan 20 02:34:22.777534 containerd[1598]: time="2026-01-20T02:34:22.775527255Z" level=warning msg="container event discarded" container=3ac38f47584f56ec5519fb1a5a32ff3b55d09ee60fac74bfd00a093eecc651fc type=CONTAINER_CREATED_EVENT Jan 20 02:34:22.917842 systemd[1]: Started sshd@25-10.0.0.99:22-10.0.0.1:49374.service - OpenSSH per-connection server daemon (10.0.0.1:49374). Jan 20 02:34:23.284312 sshd[5225]: Accepted publickey for core from 10.0.0.1 port 49374 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:34:23.288084 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:23.312516 containerd[1598]: time="2026-01-20T02:34:23.311522273Z" level=warning msg="container event discarded" container=3ac38f47584f56ec5519fb1a5a32ff3b55d09ee60fac74bfd00a093eecc651fc type=CONTAINER_STARTED_EVENT Jan 20 02:34:23.320094 systemd-logind[1583]: New session 26 of user core. Jan 20 02:34:23.341507 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 02:34:24.144101 sshd[5228]: Connection closed by 10.0.0.1 port 49374 Jan 20 02:34:24.147757 sshd-session[5225]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:24.203095 systemd[1]: sshd@25-10.0.0.99:22-10.0.0.1:49374.service: Deactivated successfully. Jan 20 02:34:24.238916 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 02:34:24.285187 systemd-logind[1583]: Session 26 logged out. Waiting for processes to exit. 
Jan 20 02:34:24.289529 systemd-logind[1583]: Removed session 26. Jan 20 02:34:25.095683 containerd[1598]: time="2026-01-20T02:34:25.095161045Z" level=warning msg="container event discarded" container=f1faa4e1f47a144ff7834747c329c01bfead79119ad16167eafedc90a9bc2241 type=CONTAINER_CREATED_EVENT Jan 20 02:34:25.095683 containerd[1598]: time="2026-01-20T02:34:25.095349626Z" level=warning msg="container event discarded" container=f1faa4e1f47a144ff7834747c329c01bfead79119ad16167eafedc90a9bc2241 type=CONTAINER_STARTED_EVENT Jan 20 02:34:25.323146 containerd[1598]: time="2026-01-20T02:34:25.322823776Z" level=warning msg="container event discarded" container=645dc74e609dc292c46922eafd32a7d2bbf4889c719ab00e7d616081d479c0bf type=CONTAINER_CREATED_EVENT Jan 20 02:34:26.344073 containerd[1598]: time="2026-01-20T02:34:26.326071298Z" level=warning msg="container event discarded" container=645dc74e609dc292c46922eafd32a7d2bbf4889c719ab00e7d616081d479c0bf type=CONTAINER_STARTED_EVENT Jan 20 02:34:29.256920 systemd[1]: Started sshd@26-10.0.0.99:22-10.0.0.1:39176.service - OpenSSH per-connection server daemon (10.0.0.1:39176). Jan 20 02:34:29.671275 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 39176 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:34:29.694699 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:29.771918 systemd-logind[1583]: New session 27 of user core. Jan 20 02:34:29.816051 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 02:34:30.905761 sshd[5265]: Connection closed by 10.0.0.1 port 39176 Jan 20 02:34:30.909845 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:30.969864 systemd[1]: sshd@26-10.0.0.99:22-10.0.0.1:39176.service: Deactivated successfully. Jan 20 02:34:31.006139 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 02:34:31.040105 systemd-logind[1583]: Session 27 logged out. 
Waiting for processes to exit. Jan 20 02:34:31.075867 systemd-logind[1583]: Removed session 27. Jan 20 02:34:34.983919 kubelet[2887]: E0120 02:34:34.983095 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:34:36.019547 systemd[1]: Started sshd@27-10.0.0.99:22-10.0.0.1:53726.service - OpenSSH per-connection server daemon (10.0.0.1:53726). Jan 20 02:34:36.530039 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 53726 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:34:36.533340 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:36.595755 systemd-logind[1583]: New session 28 of user core. Jan 20 02:34:36.639829 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 02:34:37.494479 sshd[5305]: Connection closed by 10.0.0.1 port 53726 Jan 20 02:34:37.495199 sshd-session[5298]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:37.530515 systemd[1]: sshd@27-10.0.0.99:22-10.0.0.1:53726.service: Deactivated successfully. Jan 20 02:34:37.552095 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 02:34:37.557662 systemd-logind[1583]: Session 28 logged out. Waiting for processes to exit. Jan 20 02:34:37.642102 systemd-logind[1583]: Removed session 28. Jan 20 02:34:42.580420 systemd[1]: Started sshd@28-10.0.0.99:22-10.0.0.1:53742.service - OpenSSH per-connection server daemon (10.0.0.1:53742). Jan 20 02:34:42.931284 sshd[5357]: Accepted publickey for core from 10.0.0.1 port 53742 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:34:42.943506 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:43.041984 systemd-logind[1583]: New session 29 of user core. 
Jan 20 02:34:43.080736 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 20 02:34:43.896493 sshd[5364]: Connection closed by 10.0.0.1 port 53742 Jan 20 02:34:43.902624 sshd-session[5357]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:43.928551 systemd[1]: sshd@28-10.0.0.99:22-10.0.0.1:53742.service: Deactivated successfully. Jan 20 02:34:44.022222 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 02:34:44.041668 systemd-logind[1583]: Session 29 logged out. Waiting for processes to exit. Jan 20 02:34:44.326545 systemd-logind[1583]: Removed session 29. Jan 20 02:34:48.933587 systemd[1]: Started sshd@29-10.0.0.99:22-10.0.0.1:51832.service - OpenSSH per-connection server daemon (10.0.0.1:51832). Jan 20 02:34:49.286119 sshd[5397]: Accepted publickey for core from 10.0.0.1 port 51832 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:34:49.289951 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:49.428491 systemd-logind[1583]: New session 30 of user core. Jan 20 02:34:49.461164 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 02:34:50.363512 sshd[5400]: Connection closed by 10.0.0.1 port 51832 Jan 20 02:34:50.366753 sshd-session[5397]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:50.422812 systemd[1]: sshd@29-10.0.0.99:22-10.0.0.1:51832.service: Deactivated successfully. Jan 20 02:34:50.485298 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 02:34:50.518771 systemd-logind[1583]: Session 30 logged out. Waiting for processes to exit. Jan 20 02:34:50.546512 systemd-logind[1583]: Removed session 30. Jan 20 02:34:55.465652 systemd[1]: Started sshd@30-10.0.0.99:22-10.0.0.1:57200.service - OpenSSH per-connection server daemon (10.0.0.1:57200). 
Jan 20 02:34:56.119864 sshd[5434]: Accepted publickey for core from 10.0.0.1 port 57200 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:34:56.132964 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:56.211521 systemd-logind[1583]: New session 31 of user core. Jan 20 02:34:56.254752 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 20 02:34:56.991908 kubelet[2887]: E0120 02:34:56.990958 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:34:57.602939 sshd[5437]: Connection closed by 10.0.0.1 port 57200 Jan 20 02:34:57.606056 sshd-session[5434]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:57.630665 systemd[1]: sshd@30-10.0.0.99:22-10.0.0.1:57200.service: Deactivated successfully. Jan 20 02:34:57.646593 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 02:34:57.662867 systemd-logind[1583]: Session 31 logged out. Waiting for processes to exit. Jan 20 02:34:57.695353 systemd-logind[1583]: Removed session 31. Jan 20 02:34:58.043629 kubelet[2887]: E0120 02:34:58.038152 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:02.700064 systemd[1]: Started sshd@31-10.0.0.99:22-10.0.0.1:57218.service - OpenSSH per-connection server daemon (10.0.0.1:57218). Jan 20 02:35:03.074882 sshd[5472]: Accepted publickey for core from 10.0.0.1 port 57218 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:35:03.084317 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:03.146765 systemd-logind[1583]: New session 32 of user core. Jan 20 02:35:03.219118 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 20 02:35:04.817806 sshd[5481]: Connection closed by 10.0.0.1 port 57218 Jan 20 02:35:04.838333 sshd-session[5472]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:04.957393 systemd[1]: sshd@31-10.0.0.99:22-10.0.0.1:57218.service: Deactivated successfully. Jan 20 02:35:04.970055 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 02:35:04.985685 systemd-logind[1583]: Session 32 logged out. Waiting for processes to exit. Jan 20 02:35:05.041860 systemd[1]: Started sshd@32-10.0.0.99:22-10.0.0.1:44114.service - OpenSSH per-connection server daemon (10.0.0.1:44114). Jan 20 02:35:05.119390 systemd-logind[1583]: Removed session 32. Jan 20 02:35:05.792585 sshd[5509]: Accepted publickey for core from 10.0.0.1 port 44114 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:35:05.803198 sshd-session[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:05.917109 systemd-logind[1583]: New session 33 of user core. Jan 20 02:35:05.969059 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 20 02:35:08.004337 kubelet[2887]: E0120 02:35:08.004217 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:08.211750 sshd[5512]: Connection closed by 10.0.0.1 port 44114 Jan 20 02:35:08.217696 sshd-session[5509]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:08.291069 systemd[1]: sshd@32-10.0.0.99:22-10.0.0.1:44114.service: Deactivated successfully. Jan 20 02:35:08.312377 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 02:35:08.332772 systemd-logind[1583]: Session 33 logged out. Waiting for processes to exit. Jan 20 02:35:08.439686 systemd[1]: Started sshd@33-10.0.0.99:22-10.0.0.1:44146.service - OpenSSH per-connection server daemon (10.0.0.1:44146). Jan 20 02:35:08.452830 systemd-logind[1583]: Removed session 33. 
Jan 20 02:35:09.007592 sshd[5529]: Accepted publickey for core from 10.0.0.1 port 44146 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:35:09.020099 sshd-session[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:09.111251 systemd-logind[1583]: New session 34 of user core. Jan 20 02:35:09.143143 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 20 02:35:13.431656 sshd[5533]: Connection closed by 10.0.0.1 port 44146 Jan 20 02:35:13.423887 sshd-session[5529]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:13.486414 systemd[1]: sshd@33-10.0.0.99:22-10.0.0.1:44146.service: Deactivated successfully. Jan 20 02:35:13.501823 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 02:35:13.506371 systemd-logind[1583]: Session 34 logged out. Waiting for processes to exit. Jan 20 02:35:13.522184 systemd[1]: Started sshd@34-10.0.0.99:22-10.0.0.1:44166.service - OpenSSH per-connection server daemon (10.0.0.1:44166). Jan 20 02:35:13.531122 systemd-logind[1583]: Removed session 34. Jan 20 02:35:13.775073 sshd[5573]: Accepted publickey for core from 10.0.0.1 port 44166 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:35:13.795002 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:13.874164 systemd-logind[1583]: New session 35 of user core. Jan 20 02:35:13.946836 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 20 02:35:15.669869 sshd[5581]: Connection closed by 10.0.0.1 port 44166 Jan 20 02:35:15.698617 sshd-session[5573]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:15.748344 systemd[1]: Started sshd@35-10.0.0.99:22-10.0.0.1:42674.service - OpenSSH per-connection server daemon (10.0.0.1:42674). Jan 20 02:35:15.751809 systemd[1]: sshd@34-10.0.0.99:22-10.0.0.1:44166.service: Deactivated successfully. 
Jan 20 02:35:15.760975 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 02:35:15.778699 systemd-logind[1583]: Session 35 logged out. Waiting for processes to exit. Jan 20 02:35:15.784840 systemd-logind[1583]: Removed session 35. Jan 20 02:35:16.036974 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 42674 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:35:16.046786 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:16.075844 systemd-logind[1583]: New session 36 of user core. Jan 20 02:35:16.094147 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 20 02:35:16.942272 sshd[5610]: Connection closed by 10.0.0.1 port 42674 Jan 20 02:35:16.943872 sshd-session[5604]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:16.980022 systemd[1]: sshd@35-10.0.0.99:22-10.0.0.1:42674.service: Deactivated successfully. Jan 20 02:35:17.001164 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 02:35:17.033737 systemd-logind[1583]: Session 36 logged out. Waiting for processes to exit. Jan 20 02:35:17.052148 systemd-logind[1583]: Removed session 36. Jan 20 02:35:22.022204 systemd[1]: Started sshd@36-10.0.0.99:22-10.0.0.1:42716.service - OpenSSH per-connection server daemon (10.0.0.1:42716). Jan 20 02:35:22.476144 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 42716 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:35:22.504165 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:22.626848 systemd-logind[1583]: New session 37 of user core. Jan 20 02:35:22.662059 systemd[1]: Started session-37.scope - Session 37 of User core. 
Jan 20 02:35:22.997411 kubelet[2887]: E0120 02:35:22.997296 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:24.164736 kubelet[2887]: E0120 02:35:24.049494 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:24.388632 sshd[5647]: Connection closed by 10.0.0.1 port 42716 Jan 20 02:35:24.388833 sshd-session[5644]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:24.424247 systemd[1]: sshd@36-10.0.0.99:22-10.0.0.1:42716.service: Deactivated successfully. Jan 20 02:35:24.449023 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 02:35:24.486406 systemd-logind[1583]: Session 37 logged out. Waiting for processes to exit. Jan 20 02:35:24.529058 systemd-logind[1583]: Removed session 37. Jan 20 02:35:29.509410 systemd[1]: Started sshd@37-10.0.0.99:22-10.0.0.1:56554.service - OpenSSH per-connection server daemon (10.0.0.1:56554). Jan 20 02:35:30.077621 sshd[5688]: Accepted publickey for core from 10.0.0.1 port 56554 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:35:30.097407 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:30.241886 systemd-logind[1583]: New session 38 of user core. Jan 20 02:35:30.291327 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 20 02:35:30.985496 kubelet[2887]: E0120 02:35:30.985296 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:31.109643 sshd[5692]: Connection closed by 10.0.0.1 port 56554 Jan 20 02:35:31.103783 sshd-session[5688]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:31.117585 systemd[1]: sshd@37-10.0.0.99:22-10.0.0.1:56554.service: Deactivated successfully. Jan 20 02:35:31.127870 systemd[1]: session-38.scope: Deactivated successfully. Jan 20 02:35:31.142913 systemd-logind[1583]: Session 38 logged out. Waiting for processes to exit. Jan 20 02:35:31.151365 systemd-logind[1583]: Removed session 38. Jan 20 02:35:36.312821 systemd[1]: Started sshd@38-10.0.0.99:22-10.0.0.1:59782.service - OpenSSH per-connection server daemon (10.0.0.1:59782). Jan 20 02:35:36.677624 sshd[5737]: Accepted publickey for core from 10.0.0.1 port 59782 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:35:36.684356 sshd-session[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:36.743693 systemd-logind[1583]: New session 39 of user core. Jan 20 02:35:36.791671 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 20 02:35:37.891022 sshd[5744]: Connection closed by 10.0.0.1 port 59782 Jan 20 02:35:37.906743 sshd-session[5737]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:37.974287 systemd[1]: sshd@38-10.0.0.99:22-10.0.0.1:59782.service: Deactivated successfully. Jan 20 02:35:37.991991 systemd[1]: session-39.scope: Deactivated successfully. Jan 20 02:35:38.024865 systemd-logind[1583]: Session 39 logged out. Waiting for processes to exit. Jan 20 02:35:38.067625 systemd-logind[1583]: Removed session 39. Jan 20 02:35:43.009853 systemd[1]: Started sshd@39-10.0.0.99:22-10.0.0.1:59790.service - OpenSSH per-connection server daemon (10.0.0.1:59790). 
Jan 20 02:35:43.559421 sshd[5780]: Accepted publickey for core from 10.0.0.1 port 59790 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:35:43.563426 sshd-session[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:43.627600 systemd-logind[1583]: New session 40 of user core. Jan 20 02:35:43.656938 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 20 02:35:44.700666 sshd[5783]: Connection closed by 10.0.0.1 port 59790 Jan 20 02:35:44.702908 sshd-session[5780]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:44.745425 systemd[1]: sshd@39-10.0.0.99:22-10.0.0.1:59790.service: Deactivated successfully. Jan 20 02:35:44.773123 systemd[1]: session-40.scope: Deactivated successfully. Jan 20 02:35:44.799984 systemd-logind[1583]: Session 40 logged out. Waiting for processes to exit. Jan 20 02:35:44.821593 systemd-logind[1583]: Removed session 40. Jan 20 02:35:49.849554 systemd[1]: Started sshd@40-10.0.0.99:22-10.0.0.1:49744.service - OpenSSH per-connection server daemon (10.0.0.1:49744). Jan 20 02:35:50.764297 sshd[5817]: Accepted publickey for core from 10.0.0.1 port 49744 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:35:50.796076 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:50.917851 systemd-logind[1583]: New session 41 of user core. Jan 20 02:35:50.997617 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 20 02:35:52.636643 sshd[5826]: Connection closed by 10.0.0.1 port 49744 Jan 20 02:35:52.641025 sshd-session[5817]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:52.685300 systemd[1]: sshd@40-10.0.0.99:22-10.0.0.1:49744.service: Deactivated successfully. Jan 20 02:35:52.724635 systemd[1]: session-41.scope: Deactivated successfully. Jan 20 02:35:52.772287 systemd-logind[1583]: Session 41 logged out. Waiting for processes to exit. 
Jan 20 02:35:52.844223 systemd-logind[1583]: Removed session 41. Jan 20 02:35:57.721201 systemd[1]: Started sshd@41-10.0.0.99:22-10.0.0.1:51606.service - OpenSSH per-connection server daemon (10.0.0.1:51606). Jan 20 02:35:57.993356 kubelet[2887]: E0120 02:35:57.989984 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:58.157905 sshd[5861]: Accepted publickey for core from 10.0.0.1 port 51606 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:35:58.166067 sshd-session[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:58.232119 systemd-logind[1583]: New session 42 of user core. Jan 20 02:35:58.266244 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 20 02:35:59.223833 sshd[5878]: Connection closed by 10.0.0.1 port 51606 Jan 20 02:35:59.222785 sshd-session[5861]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:59.292802 systemd[1]: sshd@41-10.0.0.99:22-10.0.0.1:51606.service: Deactivated successfully. Jan 20 02:35:59.310751 systemd[1]: session-42.scope: Deactivated successfully. Jan 20 02:35:59.370315 systemd-logind[1583]: Session 42 logged out. Waiting for processes to exit. Jan 20 02:35:59.396610 systemd-logind[1583]: Removed session 42. Jan 20 02:36:04.299200 systemd[1]: Started sshd@42-10.0.0.99:22-10.0.0.1:51614.service - OpenSSH per-connection server daemon (10.0.0.1:51614). Jan 20 02:36:04.966810 sshd[5912]: Accepted publickey for core from 10.0.0.1 port 51614 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:36:04.989640 sshd-session[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:05.100378 systemd-logind[1583]: New session 43 of user core. Jan 20 02:36:05.172659 systemd[1]: Started session-43.scope - Session 43 of User core. 
Jan 20 02:36:06.198883 sshd[5915]: Connection closed by 10.0.0.1 port 51614 Jan 20 02:36:06.219879 sshd-session[5912]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:06.283308 systemd[1]: sshd@42-10.0.0.99:22-10.0.0.1:51614.service: Deactivated successfully. Jan 20 02:36:06.335025 systemd[1]: session-43.scope: Deactivated successfully. Jan 20 02:36:06.358259 systemd-logind[1583]: Session 43 logged out. Waiting for processes to exit. Jan 20 02:36:06.368681 systemd-logind[1583]: Removed session 43. Jan 20 02:36:11.248191 systemd[1]: Started sshd@43-10.0.0.99:22-10.0.0.1:50984.service - OpenSSH per-connection server daemon (10.0.0.1:50984). Jan 20 02:36:11.599345 sshd[5956]: Accepted publickey for core from 10.0.0.1 port 50984 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:36:11.598672 sshd-session[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:11.655962 systemd-logind[1583]: New session 44 of user core. Jan 20 02:36:11.691533 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 20 02:36:13.069654 sshd[5959]: Connection closed by 10.0.0.1 port 50984 Jan 20 02:36:13.071404 sshd-session[5956]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:13.103413 systemd[1]: sshd@43-10.0.0.99:22-10.0.0.1:50984.service: Deactivated successfully. Jan 20 02:36:13.145213 systemd[1]: session-44.scope: Deactivated successfully. Jan 20 02:36:13.206119 systemd-logind[1583]: Session 44 logged out. Waiting for processes to exit. Jan 20 02:36:13.218863 systemd-logind[1583]: Removed session 44. Jan 20 02:36:18.150988 systemd[1]: Started sshd@44-10.0.0.99:22-10.0.0.1:41788.service - OpenSSH per-connection server daemon (10.0.0.1:41788). 
Jan 20 02:36:18.431993 sshd[5992]: Accepted publickey for core from 10.0.0.1 port 41788 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:36:18.439191 sshd-session[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:18.474609 systemd-logind[1583]: New session 45 of user core. Jan 20 02:36:18.497042 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 20 02:36:18.984975 kubelet[2887]: E0120 02:36:18.982669 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:18.997993 kubelet[2887]: E0120 02:36:18.993590 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:19.165900 sshd[5995]: Connection closed by 10.0.0.1 port 41788 Jan 20 02:36:19.163736 sshd-session[5992]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:19.205736 systemd-logind[1583]: Session 45 logged out. Waiting for processes to exit. Jan 20 02:36:19.212301 systemd[1]: sshd@44-10.0.0.99:22-10.0.0.1:41788.service: Deactivated successfully. Jan 20 02:36:19.245060 systemd[1]: session-45.scope: Deactivated successfully. Jan 20 02:36:19.283936 systemd-logind[1583]: Removed session 45. Jan 20 02:36:24.260909 systemd[1]: Started sshd@45-10.0.0.99:22-10.0.0.1:41794.service - OpenSSH per-connection server daemon (10.0.0.1:41794). Jan 20 02:36:24.677874 sshd[6028]: Accepted publickey for core from 10.0.0.1 port 41794 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:36:24.694200 sshd-session[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:24.733590 systemd-logind[1583]: New session 46 of user core. Jan 20 02:36:24.754727 systemd[1]: Started session-46.scope - Session 46 of User core. 
Jan 20 02:36:25.303387 sshd[6045]: Connection closed by 10.0.0.1 port 41794 Jan 20 02:36:25.304704 sshd-session[6028]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:25.357993 systemd[1]: sshd@45-10.0.0.99:22-10.0.0.1:41794.service: Deactivated successfully. Jan 20 02:36:25.387637 systemd[1]: session-46.scope: Deactivated successfully. Jan 20 02:36:25.396708 systemd-logind[1583]: Session 46 logged out. Waiting for processes to exit. Jan 20 02:36:25.414651 systemd-logind[1583]: Removed session 46. Jan 20 02:36:26.988131 kubelet[2887]: E0120 02:36:26.986718 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:30.489578 systemd[1]: Started sshd@46-10.0.0.99:22-10.0.0.1:36842.service - OpenSSH per-connection server daemon (10.0.0.1:36842). Jan 20 02:36:37.431378 sshd[6078]: Accepted publickey for core from 10.0.0.1 port 36842 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:36:37.445503 kubelet[2887]: E0120 02:36:37.443426 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:37.444814 sshd-session[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:37.886222 kubelet[2887]: E0120 02:36:37.829662 2887 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.208s" Jan 20 02:36:37.871105 systemd-logind[1583]: New session 47 of user core. Jan 20 02:36:37.958702 systemd[1]: Started session-47.scope - Session 47 of User core. 
Jan 20 02:36:38.140282 kubelet[2887]: E0120 02:36:38.121730 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:38.141619 kubelet[2887]: E0120 02:36:38.141509 2887 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:39.206511 sshd[6092]: Connection closed by 10.0.0.1 port 36842 Jan 20 02:36:39.207423 sshd-session[6078]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:39.239573 systemd[1]: sshd@46-10.0.0.99:22-10.0.0.1:36842.service: Deactivated successfully. Jan 20 02:36:39.282861 systemd[1]: session-47.scope: Deactivated successfully. Jan 20 02:36:39.312082 systemd-logind[1583]: Session 47 logged out. Waiting for processes to exit. Jan 20 02:36:39.326283 systemd-logind[1583]: Removed session 47. Jan 20 02:36:44.277365 systemd[1]: Started sshd@47-10.0.0.99:22-10.0.0.1:48254.service - OpenSSH per-connection server daemon (10.0.0.1:48254). Jan 20 02:36:44.531391 sshd[6141]: Accepted publickey for core from 10.0.0.1 port 48254 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:36:44.539120 sshd-session[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:44.568991 systemd-logind[1583]: New session 48 of user core. Jan 20 02:36:44.586706 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 20 02:36:45.255155 sshd[6144]: Connection closed by 10.0.0.1 port 48254 Jan 20 02:36:45.256554 sshd-session[6141]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:45.292269 systemd[1]: sshd@47-10.0.0.99:22-10.0.0.1:48254.service: Deactivated successfully. Jan 20 02:36:45.308373 systemd[1]: session-48.scope: Deactivated successfully. Jan 20 02:36:45.317771 systemd-logind[1583]: Session 48 logged out. 
Waiting for processes to exit. Jan 20 02:36:45.340561 systemd-logind[1583]: Removed session 48. Jan 20 02:36:50.427039 systemd[1]: Started sshd@48-10.0.0.99:22-10.0.0.1:60600.service - OpenSSH per-connection server daemon (10.0.0.1:60600). Jan 20 02:36:51.104137 sshd[6177]: Accepted publickey for core from 10.0.0.1 port 60600 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:36:51.112221 sshd-session[6177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:51.214122 systemd-logind[1583]: New session 49 of user core. Jan 20 02:36:51.250370 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 20 02:36:52.356684 sshd[6180]: Connection closed by 10.0.0.1 port 60600 Jan 20 02:36:52.365827 sshd-session[6177]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:52.466343 systemd[1]: sshd@48-10.0.0.99:22-10.0.0.1:60600.service: Deactivated successfully. Jan 20 02:36:52.524530 systemd[1]: session-49.scope: Deactivated successfully. Jan 20 02:36:52.537204 systemd-logind[1583]: Session 49 logged out. Waiting for processes to exit. Jan 20 02:36:52.568597 systemd-logind[1583]: Removed session 49. Jan 20 02:36:57.442915 systemd[1]: Started sshd@49-10.0.0.99:22-10.0.0.1:36124.service - OpenSSH per-connection server daemon (10.0.0.1:36124). Jan 20 02:36:57.930504 sshd[6213]: Accepted publickey for core from 10.0.0.1 port 36124 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:36:57.948184 sshd-session[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:58.006328 systemd-logind[1583]: New session 50 of user core. Jan 20 02:36:58.043038 systemd[1]: Started session-50.scope - Session 50 of User core. 
Jan 20 02:36:59.408553 sshd[6216]: Connection closed by 10.0.0.1 port 36124 Jan 20 02:36:59.412880 sshd-session[6213]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:59.480502 systemd[1]: sshd@49-10.0.0.99:22-10.0.0.1:36124.service: Deactivated successfully. Jan 20 02:36:59.491639 systemd-logind[1583]: Session 50 logged out. Waiting for processes to exit. Jan 20 02:36:59.522868 systemd[1]: session-50.scope: Deactivated successfully. Jan 20 02:36:59.554882 systemd-logind[1583]: Removed session 50. Jan 20 02:37:04.458814 systemd[1]: Started sshd@50-10.0.0.99:22-10.0.0.1:48610.service - OpenSSH per-connection server daemon (10.0.0.1:48610). Jan 20 02:37:04.978230 sshd[6255]: Accepted publickey for core from 10.0.0.1 port 48610 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 02:37:05.010502 sshd-session[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:37:05.062910 systemd-logind[1583]: New session 51 of user core. Jan 20 02:37:05.122669 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 20 02:37:05.848187 sshd[6270]: Connection closed by 10.0.0.1 port 48610 Jan 20 02:37:05.847723 sshd-session[6255]: pam_unix(sshd:session): session closed for user core Jan 20 02:37:05.868263 systemd[1]: sshd@50-10.0.0.99:22-10.0.0.1:48610.service: Deactivated successfully. Jan 20 02:37:05.884544 systemd[1]: session-51.scope: Deactivated successfully. Jan 20 02:37:05.893067 systemd-logind[1583]: Session 51 logged out. Waiting for processes to exit. Jan 20 02:37:05.906148 systemd-logind[1583]: Removed session 51.