Jan 20 01:26:51.411179 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026
Jan 20 01:26:51.411319 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 01:26:51.411337 kernel: BIOS-provided physical RAM map:
Jan 20 01:26:51.411345 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 01:26:51.411354 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 01:26:51.411364 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 01:26:51.411376 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 01:26:51.411385 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 01:26:51.411429 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 01:26:51.411441 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 01:26:51.411450 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 01:26:51.411463 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 01:26:51.411471 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 01:26:51.411480 kernel: NX (Execute Disable) protection: active
Jan 20 01:26:51.411492 kernel: APIC: Static calls initialized
Jan 20 01:26:51.411503 kernel: SMBIOS 2.8 present.
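The BIOS-e820 entries above are the firmware's physical RAM map; only the ranges marked usable become allocatable memory. A minimal Python sketch (an illustration, not part of the boot flow) that totals the usable ranges from lines in this format:

```python
import re

E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_text: str) -> int:
    """Sum the sizes of all 'usable' e820 ranges (end addresses are inclusive)."""
    total = 0
    for start, end, kind in E820_RE.findall(log_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1
    return total

sample = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""
print(round(usable_bytes(sample) / 2**30, 2))  # ~2.45 GiB for this VM
```

The result is consistent with the "Memory: 2420720K/2571752K available" line that appears once the kernel finishes carving out its own reservations.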
Jan 20 01:26:51.411550 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 01:26:51.411563 kernel: DMI: Memory slots populated: 1/1
Jan 20 01:26:51.411571 kernel: Hypervisor detected: KVM
Jan 20 01:26:51.411580 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 01:26:51.411588 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 01:26:51.411597 kernel: kvm-clock: using sched offset of 77175951175 cycles
Jan 20 01:26:51.411609 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 01:26:51.411619 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 01:26:51.411630 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 01:26:51.411640 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 01:26:51.411656 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 01:26:51.411667 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 01:26:51.411677 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 01:26:51.411687 kernel: Using GB pages for direct mapping
Jan 20 01:26:51.411697 kernel: ACPI: Early table checksum verification disabled
Jan 20 01:26:51.411707 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 01:26:51.411717 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:26:51.411727 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:26:51.411736 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:26:51.411751 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 01:26:51.411761 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:26:51.411772 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:26:51.411782 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:26:51.411791 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:26:51.411806 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 01:26:51.411820 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 01:26:51.411832 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 01:26:51.411844 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 01:26:51.411855 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 01:26:51.411864 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 01:26:51.411873 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 01:26:51.411919 kernel: No NUMA configuration found
Jan 20 01:26:51.411932 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 01:26:51.411947 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 20 01:26:51.411958 kernel: Zone ranges:
Jan 20 01:26:51.411968 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 01:26:51.411978 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 01:26:51.411990 kernel: Normal empty
Jan 20 01:26:51.412000 kernel: Device empty
Jan 20 01:26:51.412011 kernel: Movable zone start for each node
Jan 20 01:26:51.412023 kernel: Early memory node ranges
Jan 20 01:26:51.412032 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 01:26:51.412041 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 01:26:51.412056 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 01:26:51.412069 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 01:26:51.412079 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 01:26:51.412122 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 01:26:51.412136 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 01:26:51.412149 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 01:26:51.412158 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 01:26:51.412167 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 01:26:51.422373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 01:26:51.422423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 01:26:51.422434 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 01:26:51.422444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 01:26:51.422455 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 01:26:51.422465 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 01:26:51.422475 kernel: TSC deadline timer available
Jan 20 01:26:51.422485 kernel: CPU topo: Max. logical packages: 1
Jan 20 01:26:51.422496 kernel: CPU topo: Max. logical dies: 1
Jan 20 01:26:51.422507 kernel: CPU topo: Max. dies per package: 1
Jan 20 01:26:51.422522 kernel: CPU topo: Max. threads per core: 1
Jan 20 01:26:51.422532 kernel: CPU topo: Num. cores per package: 4
Jan 20 01:26:51.422541 kernel: CPU topo: Num. threads per package: 4
Jan 20 01:26:51.422550 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 01:26:51.422561 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 01:26:51.422573 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 01:26:51.422584 kernel: kvm-guest: setup PV sched yield
Jan 20 01:26:51.422594 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 01:26:51.422605 kernel: Booting paravirtualized kernel on KVM
Jan 20 01:26:51.422621 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 01:26:51.422632 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 01:26:51.422642 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 01:26:51.422653 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 01:26:51.422664 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 01:26:51.422675 kernel: kvm-guest: PV spinlocks enabled
Jan 20 01:26:51.422685 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 01:26:51.422699 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 01:26:51.422713 kernel: random: crng init done
Jan 20 01:26:51.422724 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 01:26:51.422736 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 01:26:51.422746 kernel: Fallback order for Node 0: 0
Jan 20 01:26:51.422755 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 20 01:26:51.422764 kernel: Policy zone: DMA32
Jan 20 01:26:51.422773 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 01:26:51.422785 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 01:26:51.422796 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 01:26:51.422811 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 01:26:51.422822 kernel: Dynamic Preempt: voluntary
Jan 20 01:26:51.422833 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 01:26:51.422845 kernel: rcu: RCU event tracing is enabled.
Jan 20 01:26:51.422856 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 01:26:51.422867 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 01:26:51.422910 kernel: Rude variant of Tasks RCU enabled.
Jan 20 01:26:51.422922 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 01:26:51.422932 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 01:26:51.422943 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 01:26:51.422959 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:26:51.422969 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:26:51.422978 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:26:51.422987 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 01:26:51.422999 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 01:26:51.423021 kernel: Console: colour VGA+ 80x25
Jan 20 01:26:51.423035 kernel: printk: legacy console [ttyS0] enabled
Jan 20 01:26:51.423047 kernel: ACPI: Core revision 20240827
Jan 20 01:26:51.423058 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 01:26:51.423069 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 01:26:51.423080 kernel: x2apic enabled
Jan 20 01:26:51.423095 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 01:26:51.423135 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 01:26:51.423147 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 01:26:51.423159 kernel: kvm-guest: setup PV IPIs
Jan 20 01:26:51.423170 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 01:26:51.423186 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 01:26:51.441416 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 01:26:51.441464 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 01:26:51.441478 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 01:26:51.441492 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 01:26:51.441505 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 01:26:51.441516 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 01:26:51.441529 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 01:26:51.441542 kernel: Speculative Store Bypass: Vulnerable
Jan 20 01:26:51.441565 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 01:26:51.441579 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 01:26:51.441592 kernel: active return thunk: srso_alias_return_thunk
Jan 20 01:26:51.441604 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 01:26:51.441615 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 01:26:51.441626 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 01:26:51.441639 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 01:26:51.441652 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 01:26:51.441667 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 01:26:51.441679 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 01:26:51.441691 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 01:26:51.441703 kernel: Freeing SMP alternatives memory: 32K
Jan 20 01:26:51.441715 kernel: pid_max: default: 32768 minimum: 301
Jan 20 01:26:51.441726 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 01:26:51.441738 kernel: landlock: Up and running.
Jan 20 01:26:51.441750 kernel: SELinux: Initializing.
Jan 20 01:26:51.441762 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:26:51.441777 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:26:51.441819 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 01:26:51.441831 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 01:26:51.441844 kernel: signal: max sigframe size: 1776
Jan 20 01:26:51.441856 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 01:26:51.441869 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 01:26:51.441881 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 01:26:51.441919 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 01:26:51.441931 kernel: smp: Bringing up secondary CPUs ...
Jan 20 01:26:51.441948 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 01:26:51.441961 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 01:26:51.441973 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 01:26:51.441985 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 01:26:51.441997 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145096K reserved, 0K cma-reserved)
Jan 20 01:26:51.442008 kernel: devtmpfs: initialized
Jan 20 01:26:51.442021 kernel: x86/mm: Memory block size: 128MB
Jan 20 01:26:51.442033 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 01:26:51.442046 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 01:26:51.442063 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 01:26:51.442075 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 01:26:51.442088 kernel: audit: initializing netlink subsys (disabled)
Jan 20 01:26:51.442100 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 01:26:51.442112 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 01:26:51.442125 kernel: audit: type=2000 audit(1768872384.355:1): state=initialized audit_enabled=0 res=1
Jan 20 01:26:51.442138 kernel: cpuidle: using governor menu
Jan 20 01:26:51.442149 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 01:26:51.442162 kernel: dca service started, version 1.12.1
Jan 20 01:26:51.442179 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 20 01:26:51.442190 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 01:26:51.442304 kernel: PCI: Using configuration type 1 for base access
Jan 20 01:26:51.442317 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
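The BogoMIPS figures above follow directly from the lpj=2445426 value printed during calibration. Assuming CONFIG_HZ=1000 (an inference from the numbers, not stated in the log), the kernel's integer-division formula reproduces both printouts exactly:

```python
# Reproduce the kernel's BogoMIPS output from lpj=2445426.
# HZ=1000 is assumed here; it is consistent with the printed values.
lpj = 2445426
HZ = 1000

def bogomips(loops: int) -> str:
    # The kernel prints "%lu.%02lu" using integer division, so truncate
    # rather than round.
    return f"{loops // (500000 // HZ)}.{(loops // (5000 // HZ)) % 100:02d}"

print(bogomips(lpj))      # 4890.85  -- the per-CPU value
print(bogomips(lpj * 4))  # 19563.40 -- the "Total of 4 processors" value
```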
Jan 20 01:26:51.442329 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 01:26:51.442341 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 01:26:51.442353 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 01:26:51.442366 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 01:26:51.442378 kernel: ACPI: Added _OSI(Module Device)
Jan 20 01:26:51.442397 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 01:26:51.442411 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 01:26:51.442422 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 01:26:51.442432 kernel: ACPI: Interpreter enabled
Jan 20 01:26:51.442441 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 01:26:51.442454 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 01:26:51.442466 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 01:26:51.442479 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 01:26:51.442489 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 01:26:51.442506 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 01:26:51.443139 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 01:26:51.453716 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 01:26:51.453909 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 01:26:51.453926 kernel: PCI host bridge to bus 0000:00
Jan 20 01:26:51.454147 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 01:26:51.459544 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 01:26:51.459724 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 01:26:51.459885 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 01:26:51.460049 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 01:26:51.466879 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 01:26:51.467082 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 01:26:51.467561 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 01:26:51.467786 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 01:26:51.467951 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 20 01:26:51.468113 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 20 01:26:51.479670 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 20 01:26:51.479930 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 01:26:51.480292 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 11718 usecs
Jan 20 01:26:51.480588 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 01:26:51.480816 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 20 01:26:51.481021 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 20 01:26:51.481319 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 01:26:51.481580 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 01:26:51.481776 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 20 01:26:51.482003 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 20 01:26:51.500890 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 01:26:51.501365 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 01:26:51.501581 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 20 01:26:51.501780 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 20 01:26:51.502057 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 01:26:51.512556 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 20 01:26:51.512936 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 01:26:51.513149 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 01:26:51.522668 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 15625 usecs
Jan 20 01:26:51.522997 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 01:26:51.528041 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 20 01:26:51.529571 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 20 01:26:51.530130 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 01:26:51.530590 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 20 01:26:51.530626 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 01:26:51.530638 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 01:26:51.530648 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 01:26:51.530658 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 01:26:51.530668 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 01:26:51.530678 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 01:26:51.530688 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 01:26:51.530698 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 01:26:51.530710 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 01:26:51.530728 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 01:26:51.530739 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 01:26:51.530750 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 01:26:51.530762 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 01:26:51.530773 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 01:26:51.530784 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 01:26:51.530796 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 01:26:51.530807 kernel: iommu: Default domain type: Translated
Jan 20 01:26:51.530818 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 01:26:51.530833 kernel: PCI: Using ACPI for IRQ routing
Jan 20 01:26:51.530844 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 01:26:51.530856 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 01:26:51.530867 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 01:26:51.531045 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 01:26:51.544733 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 01:26:51.549633 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 01:26:51.549672 kernel: vgaarb: loaded
Jan 20 01:26:51.549697 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 01:26:51.549709 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
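The bracketed vendor:device IDs in the enumeration above identify the emulated hardware. The names below are the conventional pci.ids identities of these IDs, added as an annotation; they are not stated in the log itself:

```python
import re

# Standard identities of the IDs seen on this QEMU Q35 machine (assumed
# from the usual pci.ids database, not from this log).
QEMU_Q35_IDS = {
    "8086:29c0": "Intel 82G33/G31 host bridge (Q35 emulation)",
    "1234:1111": "QEMU/Bochs standard VGA",
    "1af4:1005": "virtio-rng (entropy device)",
    "1af4:1001": "virtio-blk (the /dev/vda disk probed later)",
    "1af4:1000": "virtio-net (eth0)",
    "8086:2918": "Intel ICH9 LPC bridge",
    "8086:2922": "Intel ICH9 AHCI controller (the six SATA ports below)",
    "8086:2930": "Intel ICH9 SMBus controller",
}

line = "pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000"
m = re.search(r"\[([0-9a-f]{4}:[0-9a-f]{4})\]", line)
if m:
    print(QEMU_Q35_IDS.get(m.group(1), "unknown device"))
```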
Jan 20 01:26:51.549720 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 01:26:51.549732 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 01:26:51.549745 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 01:26:51.549757 kernel: pnp: PnP ACPI init
Jan 20 01:26:51.560570 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 01:26:51.560619 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 01:26:51.560648 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 01:26:51.560661 kernel: NET: Registered PF_INET protocol family
Jan 20 01:26:51.560674 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 01:26:51.560686 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 01:26:51.560697 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 01:26:51.560709 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 01:26:51.560720 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 01:26:51.560732 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 01:26:51.560743 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:26:51.560760 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:26:51.560771 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 01:26:51.560782 kernel: NET: Registered PF_XDP protocol family
Jan 20 01:26:51.560980 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 01:26:51.561146 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 01:26:51.561416 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 01:26:51.561582 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 01:26:51.561741 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 01:26:51.561962 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 01:26:51.561983 kernel: PCI: CLS 0 bytes, default 64
Jan 20 01:26:51.561996 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 01:26:51.562008 kernel: Initialise system trusted keyrings
Jan 20 01:26:51.562021 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 01:26:51.562032 kernel: Key type asymmetric registered
Jan 20 01:26:51.562044 kernel: Asymmetric key parser 'x509' registered
Jan 20 01:26:51.562056 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 01:26:51.562068 kernel: io scheduler mq-deadline registered
Jan 20 01:26:51.562085 kernel: io scheduler kyber registered
Jan 20 01:26:51.562096 kernel: io scheduler bfq registered
Jan 20 01:26:51.562108 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 01:26:51.562120 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 01:26:51.562130 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 01:26:51.562140 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 01:26:51.562149 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 01:26:51.562160 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 01:26:51.562172 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 01:26:51.562184 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
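In these hash-table lines, "order: N" means 2^N contiguous 4 KiB pages, which is exactly where the byte counts come from. A quick consistency check over the values printed above:

```python
# "order: N" = 2**N pages of 4096 bytes each.
PAGE = 4096

tables = {  # name: (entries, order, bytes as printed in the log)
    "TCP established": (32768, 6, 262144),
    "TCP bind":        (32768, 8, 1048576),
    "UDP":             (2048,  4, 65536),
}
for name, (entries, order, reported) in tables.items():
    size = (1 << order) * PAGE
    assert size == reported        # matches the log exactly
    print(f"{name}: {size} bytes, {size // entries} bytes/entry")
```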
Jan 20 01:26:51.572107 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 01:26:51.572594 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 01:26:51.572624 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 01:26:51.584823 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 01:26:51.585055 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T01:26:46 UTC (1768872406)
Jan 20 01:26:51.586321 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 01:26:51.586354 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 01:26:51.586378 kernel: NET: Registered PF_INET6 protocol family
Jan 20 01:26:51.586389 kernel: Segment Routing with IPv6
Jan 20 01:26:51.586399 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 01:26:51.586410 kernel: NET: Registered PF_PACKET protocol family
Jan 20 01:26:51.586420 kernel: Key type dns_resolver registered
Jan 20 01:26:51.586431 kernel: IPI shorthand broadcast: enabled
Jan 20 01:26:51.586467 kernel: sched_clock: Marking stable (16923072608, 4683671039)->(23003996949, -1397253302)
Jan 20 01:26:51.586478 kernel: registered taskstats version 1
Jan 20 01:26:51.586489 kernel: Loading compiled-in X.509 certificates
Jan 20 01:26:51.586504 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9'
Jan 20 01:26:51.586514 kernel: Demotion targets for Node 0: null
Jan 20 01:26:51.586525 kernel: Key type .fscrypt registered
Jan 20 01:26:51.586535 kernel: Key type fscrypt-provisioning registered
Jan 20 01:26:51.586545 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 01:26:51.586556 kernel: ima: Allocated hash algorithm: sha1
Jan 20 01:26:51.586567 kernel: ima: No architecture policies found
Jan 20 01:26:51.586577 kernel: clk: Disabling unused clocks
Jan 20 01:26:51.586588 kernel: Warning: unable to open an initial console.
Jan 20 01:26:51.586602 kernel: Freeing unused kernel image (initmem) memory: 46204K
Jan 20 01:26:51.586612 kernel: Write protecting the kernel read-only data: 40960k
Jan 20 01:26:51.586623 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 20 01:26:51.588170 kernel: Run /init as init process
Jan 20 01:26:51.588245 kernel: with arguments:
Jan 20 01:26:51.588292 kernel: /init
Jan 20 01:26:51.588303 kernel: with environment:
Jan 20 01:26:51.588313 kernel: HOME=/
Jan 20 01:26:51.588323 kernel: TERM=linux
Jan 20 01:26:51.588344 systemd[1]: Successfully made /usr/ read-only.
Jan 20 01:26:51.588361 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 01:26:51.588374 systemd[1]: Detected virtualization kvm.
Jan 20 01:26:51.588385 systemd[1]: Detected architecture x86-64.
Jan 20 01:26:51.588397 systemd[1]: Running in initrd.
Jan 20 01:26:51.588409 systemd[1]: No hostname configured, using default hostname.
Jan 20 01:26:51.588421 systemd[1]: Hostname set to <localhost>.
Jan 20 01:26:51.588437 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 01:26:51.588465 systemd[1]: Queued start job for default target initrd.target.
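The rtc_cmos line above pairs the epoch value 1768872406 with the wall-clock time 2026-01-20T01:26:46 UTC; the two encodings agree, as a one-line check shows:

```python
from datetime import datetime, timezone

# Epoch seconds taken from the "setting system clock" line above.
print(datetime.fromtimestamp(1768872406, tz=timezone.utc).isoformat())
# -> 2026-01-20T01:26:46+00:00
```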
Jan 20 01:26:51.588479 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:26:51.588490 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:26:51.588505 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 01:26:51.588517 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 01:26:51.588533 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 01:26:51.588547 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 01:26:51.588561 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 01:26:51.588573 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 01:26:51.588586 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:26:51.588598 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:26:51.588610 systemd[1]: Reached target paths.target - Path Units.
Jan 20 01:26:51.588627 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 01:26:51.588638 systemd[1]: Reached target swap.target - Swaps.
Jan 20 01:26:51.588649 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 01:26:51.588660 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 01:26:51.588672 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 01:26:51.588684 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 01:26:51.588697 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 20 01:26:51.588709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 01:26:51.588721 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 01:26:51.588738 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 01:26:51.588750 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 01:26:51.588763 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 01:26:51.588775 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 01:26:51.588787 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 01:26:51.588801 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 20 01:26:51.588813 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 01:26:51.588824 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 01:26:51.588838 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 01:26:51.588852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:26:51.588868 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 01:26:51.588884 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 01:26:51.588943 systemd-journald[203]: Collecting audit messages is disabled.
Jan 20 01:26:51.588978 systemd[1]: Finished systemd-fsck-usr.service.
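Device unit names like dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device above come from systemd's path escaping: '/' becomes '-', and other special characters, including '-' itself, become \xNN hex escapes. A simplified Python re-implementation of `systemd-escape --path` (real systemd also handles leading dots, empty paths, and a few more allowed characters):

```python
def systemd_escape_path(path: str) -> str:
    """Simplified sketch of systemd's path-to-unit-name escaping."""
    out = []
    for i, ch in enumerate(path.strip("/")):
        if ch.isalnum() or ch == "_" or (ch == "." and i > 0):
            out.append(ch)                    # safe characters pass through
        elif ch == "/":
            out.append("-")                   # path separator becomes '-'
        else:
            out.append(f"\\x{ord(ch):02x}")   # e.g. '-' becomes \x2d
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log above
```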
Jan 20 01:26:51.588991 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 01:26:51.589004 systemd-journald[203]: Journal started
Jan 20 01:26:51.589035 systemd-journald[203]: Runtime Journal (/run/log/journal/d557ffa18d1246d38d2b56718ba81b97) is 6M, max 48.3M, 42.2M free.
Jan 20 01:26:51.605999 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 01:26:51.638490 systemd-modules-load[204]: Inserted module 'overlay'
Jan 20 01:26:52.767337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:26:52.947408 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 01:26:52.893906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:26:53.123895 kernel: Bridge firewalling registered
Jan 20 01:26:53.046012 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 01:26:53.139192 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 20 01:26:53.150697 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:26:53.158455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:26:53.220863 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 01:26:53.255068 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 01:26:53.263503 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:26:53.309681 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 01:26:53.642984 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 20 01:26:53.666796 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:26:53.711946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:26:53.958488 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 01:26:53.759770 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:26:53.858624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 01:26:54.377157 systemd-resolved[254]: Positive Trust Anchors:
Jan 20 01:26:54.377894 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:26:54.377965 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:26:54.452670 systemd-resolved[254]: Defaulting to hostname 'linux'.
Jan 20 01:26:54.487629 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:26:54.653157 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:26:55.605031 kernel: SCSI subsystem initialized
Jan 20 01:26:55.649971 kernel: hrtimer: interrupt took 13735277 ns
Jan 20 01:26:55.777446 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 01:26:55.920242 kernel: iscsi: registered transport (tcp)
Jan 20 01:26:56.109448 kernel: iscsi: registered transport (qla4xxx)
Jan 20 01:26:56.109546 kernel: QLogic iSCSI HBA Driver
Jan 20 01:26:56.336562 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 01:26:56.754006 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 01:26:56.803699 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 01:26:57.925936 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 01:26:57.975655 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 01:26:58.193948 kernel: raid6: avx2x4 gen() 8729 MB/s
Jan 20 01:26:58.217422 kernel: raid6: avx2x2 gen() 9045 MB/s
Jan 20 01:26:58.247380 kernel: raid6: avx2x1 gen() 5153 MB/s
Jan 20 01:26:58.247488 kernel: raid6: using algorithm avx2x2 gen() 9045 MB/s
Jan 20 01:26:58.275142 kernel: raid6: .... xor() 8945 MB/s, rmw enabled
Jan 20 01:26:58.277817 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 01:26:58.384705 kernel: xor: automatically using best checksumming function avx
Jan 20 01:27:01.306484 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 01:27:01.380416 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 01:27:01.461310 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:27:01.875965 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Jan 20 01:27:01.960515 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:27:02.060682 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 01:27:02.465826 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jan 20 01:27:02.862467 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 01:27:02.940679 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 01:27:05.047719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:27:05.118694 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 01:27:06.579605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
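The raid6 lines above benchmark each gen() implementation and keep the fastest; note avx2x2 beating the wider avx2x4 here, which is plausible under virtualization. The selection logic in miniature, using the MB/s figures from this boot:

```python
# Throughputs measured by the kernel's raid6 benchmark in this log.
results = {"avx2x4": 8729, "avx2x2": 9045, "avx2x1": 5153}  # MB/s

best = max(results, key=results.get)
print(f"raid6: using algorithm {best} gen() {results[best]} MB/s")
```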
Jan 20 01:27:06.590598 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:27:06.703101 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:27:06.751812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:27:06.840809 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 01:27:06.949357 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 01:27:06.957414 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 01:27:07.047461 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 01:27:07.137583 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 01:27:07.137678 kernel: GPT:9289727 != 19775487
Jan 20 01:27:07.137697 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 01:27:07.137713 kernel: GPT:9289727 != 19775487
Jan 20 01:27:07.137730 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 01:27:07.137745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 01:27:08.725014 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:27:08.929815 kernel: libata version 3.00 loaded.
Jan 20 01:27:09.664459 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 01:27:09.733632 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 20 01:27:09.944268 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 01:27:09.950134 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 01:27:09.980545 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 01:27:10.203467 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 20 01:27:10.203842 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 20 01:27:10.204076 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 01:27:10.248393 kernel: scsi host0: ahci
Jan 20 01:27:10.248983 kernel: scsi host1: ahci
Jan 20 01:27:10.164069 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 01:27:10.290268 kernel: scsi host2: ahci
Jan 20 01:27:10.333190 kernel: scsi host3: ahci
Jan 20 01:27:10.375680 kernel: scsi host4: ahci
Jan 20 01:27:10.376306 kernel: scsi host5: ahci
Jan 20 01:27:10.407682 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 lpm-pol 1
Jan 20 01:27:10.407788 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 lpm-pol 1
Jan 20 01:27:10.409120 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 01:27:10.547867 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 lpm-pol 1
Jan 20 01:27:10.547902 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 lpm-pol 1
Jan 20 01:27:10.547917 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 lpm-pol 1
Jan 20 01:27:10.547931 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 lpm-pol 1
Jan 20 01:27:10.525728 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 01:27:10.646451 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
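The GPT warnings above are pure arithmetic: virtio-blk reports 19775488 512-byte sectors, so the backup GPT header belongs at the last LBA, 19775487, but it sits at LBA 9289727 because the disk image was grown after it was built. The disk-uuid.service started here rewrites the headers a few entries later. The numbers:

```python
# Values from the virtio_blk and GPT lines above.
sectors = 19775488                      # current disk size in 512-byte sectors
print(sectors - 1)                      # 19775487: where the backup header belongs
print((9289727 + 1) * 512 / 2**30)      # ~4.43 GiB: the original image size
```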
Jan 20 01:27:10.855273 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 01:27:10.855373 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 01:27:10.855406 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 01:27:10.855485 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 01:27:10.862320 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 01:27:10.896339 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 01:27:10.896476 kernel: ata3.00: applying bridge limits
Jan 20 01:27:10.928565 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 01:27:10.928646 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 01:27:10.928667 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 01:27:10.930496 disk-uuid[555]: Primary Header is updated.
Jan 20 01:27:10.930496 disk-uuid[555]: Secondary Entries is updated.
Jan 20 01:27:10.930496 disk-uuid[555]: Secondary Header is updated.
Jan 20 01:27:11.021180 kernel: ata3.00: configured for UDMA/100
Jan 20 01:27:11.021283 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 01:27:11.021855 kernel: AES CTR mode by8 optimization enabled
Jan 20 01:27:11.021885 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 01:27:11.618838 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 01:27:11.619379 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 01:27:11.696368 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 01:27:12.118989 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 01:27:12.180244 disk-uuid[556]: The operation has completed successfully.
Jan 20 01:27:12.877985 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 01:27:12.885105 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 01:27:12.929889 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 01:27:13.077682 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 01:27:13.168121 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:27:13.187684 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 01:27:13.206829 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 01:27:13.370310 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 01:27:13.563635 sh[640]: Success
Jan 20 01:27:13.708775 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 01:27:13.784314 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 01:27:13.784402 kernel: device-mapper: uevent: version 1.0.3
Jan 20 01:27:13.795171 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 20 01:27:14.010482 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 20 01:27:14.340375 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 01:27:14.411664 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 01:27:14.559101 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 01:27:14.649901 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (659)
Jan 20 01:27:14.665973 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340
Jan 20 01:27:14.666078 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 01:27:14.738942 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 01:27:14.739524 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 20 01:27:14.750878 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 01:27:14.761480 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 01:27:14.772626 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 01:27:14.786581 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 01:27:14.863417 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 01:27:15.188908 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (694)
Jan 20 01:27:15.223273 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 01:27:15.223356 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 01:27:15.294062 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 01:27:15.294147 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 01:27:15.388792 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 01:27:15.418389 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 01:27:15.543736 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 01:27:17.835694 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 01:27:17.877810 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 01:27:17.924649 ignition[761]: Ignition 2.22.0
Jan 20 01:27:17.924696 ignition[761]: Stage: fetch-offline
Jan 20 01:27:17.924761 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Jan 20 01:27:17.924776 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 01:27:17.924997 ignition[761]: parsed url from cmdline: ""
Jan 20 01:27:17.925006 ignition[761]: no config URL provided
Jan 20 01:27:17.925015 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 01:27:17.925031 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Jan 20 01:27:17.925299 ignition[761]: op(1): [started] loading QEMU firmware config module
Jan 20 01:27:17.925308 ignition[761]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 01:27:18.016781 ignition[761]: op(1): [finished] loading QEMU firmware config module
Jan 20 01:27:18.016822 ignition[761]: QEMU firmware config was not found. Ignoring...
Jan 20 01:27:18.367162 systemd-networkd[833]: lo: Link UP
Jan 20 01:27:18.367173 systemd-networkd[833]: lo: Gained carrier
Jan 20 01:27:18.385865 systemd-networkd[833]: Enumeration completed
Jan 20 01:27:18.388125 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 01:27:18.417300 systemd[1]: Reached target network.target - Network.
Jan 20 01:27:18.557900 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 01:27:18.557951 systemd-networkd[833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 01:27:18.656279 systemd-networkd[833]: eth0: Link UP
Jan 20 01:27:18.682679 systemd-networkd[833]: eth0: Gained carrier
Jan 20 01:27:18.682711 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 01:27:18.877574 systemd-networkd[833]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 01:27:19.120034 ignition[761]: parsing config with SHA512: d587a00be38c0053d9f78c78a09f4bc5ac894d7734e6632a691b90c2ed2732662df97922ee1f5b930ecf808be71d9d216f04f6bfc7466e93e460905ece45f721
Jan 20 01:27:19.352397 systemd-resolved[254]: Detected conflict on linux IN A 10.0.0.36
Jan 20 01:27:19.352451 systemd-resolved[254]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Jan 20 01:27:19.516727 unknown[761]: fetched base config from "system"
Jan 20 01:27:19.516746 unknown[761]: fetched user config from "qemu"
Jan 20 01:27:19.809764 ignition[761]: fetch-offline: fetch-offline passed
Jan 20 01:27:19.810180 ignition[761]: Ignition finished successfully
Jan 20 01:27:19.985690 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 01:27:20.054901 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 01:27:20.181623 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 01:27:20.185145 systemd-networkd[833]: eth0: Gained IPv6LL
Jan 20 01:27:21.502770 ignition[841]: Ignition 2.22.0
Jan 20 01:27:21.502845 ignition[841]: Stage: kargs
Jan 20 01:27:21.503102 ignition[841]: no configs at "/usr/lib/ignition/base.d"
Jan 20 01:27:21.503118 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 01:27:21.547336 ignition[841]: kargs: kargs passed
Jan 20 01:27:21.588067 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 01:27:21.547471 ignition[841]: Ignition finished successfully
Jan 20 01:27:21.756887 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 01:27:22.555807 ignition[849]: Ignition 2.22.0
Jan 20 01:27:22.558650 ignition[849]: Stage: disks
Jan 20 01:27:22.568315 ignition[849]: no configs at "/usr/lib/ignition/base.d"
Jan 20 01:27:22.568395 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 01:27:22.597141 ignition[849]: disks: disks passed
Jan 20 01:27:22.604026 ignition[849]: Ignition finished successfully
Jan 20 01:27:22.701321 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 01:27:22.716973 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 01:27:22.859283 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 01:27:22.895895 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 01:27:22.940700 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 01:27:22.962989 systemd[1]: Reached target basic.target - Basic System.
Jan 20 01:27:23.010741 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
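Ignition logs the SHA512 of each config it parses ("parsing config with SHA512: d587a0..." above). The digest is reproducible from the raw config bytes; the bytes below are a stand-in only, since the actual QEMU-provided config is not shown in this log:

```python
import hashlib

# Placeholder config bytes -- the real config (and its spec version) is not
# included in this log, so this digest will not match the one logged above.
config = b'{"ignition": {"version": "3.4.0"}}'
print(hashlib.sha512(config).hexdigest())
```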
Jan 20 01:27:23.700287 systemd-fsck[860]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 20 01:27:23.788651 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 01:27:23.901832 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 01:27:26.022665 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none.
Jan 20 01:27:26.058069 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 01:27:26.135485 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 01:27:26.214902 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 01:27:26.339823 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 01:27:26.439129 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869)
Jan 20 01:27:26.354694 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 01:27:26.489829 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 01:27:26.489874 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 01:27:26.354796 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 01:27:26.354847 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 01:27:26.632103 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 01:27:26.632153 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 01:27:26.515021 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 01:27:26.589912 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 01:27:26.659873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 01:27:27.012707 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 01:27:27.074074 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory
Jan 20 01:27:27.133943 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 01:27:27.202102 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 01:27:28.547869 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 01:27:28.586956 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 01:27:28.688051 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 01:27:28.761446 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 01:27:28.796929 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 01:27:29.160435 ignition[981]: INFO : Ignition 2.22.0
Jan 20 01:27:29.185688 ignition[981]: INFO : Stage: mount
Jan 20 01:27:29.185688 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 01:27:29.185688 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 01:27:29.185688 ignition[981]: INFO : mount: mount passed
Jan 20 01:27:29.185688 ignition[981]: INFO : Ignition finished successfully
Jan 20 01:27:29.191408 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 01:27:29.243804 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 01:27:29.379686 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
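The fsck summary above reads as usage ratios of the freshly created ROOT filesystem:

```python
# "15/553520 files, 52789/553472 blocks" from the systemd-fsck line above.
files_used, files_total = 15, 553520
blocks_used, blocks_total = 52789, 553472
print(f"inodes: {100 * files_used / files_total:.4f}% used")   # ~0.0027%
print(f"blocks: {100 * blocks_used / blocks_total:.2f}% used") # ~9.54%
```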
Jan 20 01:27:29.510504 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 01:27:29.597999 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (993)
Jan 20 01:27:29.697324 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 01:27:29.697833 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 01:27:29.754013 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 01:27:29.754113 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 01:27:29.767027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 01:27:30.121329 ignition[1012]: INFO : Ignition 2.22.0
Jan 20 01:27:30.142475 ignition[1012]: INFO : Stage: files
Jan 20 01:27:30.142475 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 01:27:30.142475 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 01:27:30.232920 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 01:27:30.232920 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 01:27:30.232920 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 01:27:30.232920 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 01:27:30.232920 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 01:27:30.232920 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 01:27:30.207769 unknown[1012]: wrote ssh authorized keys file for user: core
Jan 20 01:27:30.602494 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 01:27:30.602494 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 20 01:27:30.954685 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 01:27:32.300815 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 01:27:32.300815 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 01:27:32.450661 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 20 01:27:33.549132 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 20 01:27:47.709411 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 01:27:47.709411 ignition[1012]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 20 01:27:47.864549 ignition[1012]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 01:27:48.068294 ignition[1012]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 01:27:48.068294 ignition[1012]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 20 01:27:48.068294 ignition[1012]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 20 01:27:48.068294 ignition[1012]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 01:27:48.068294 ignition[1012]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 01:27:48.068294 ignition[1012]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 20 01:27:48.572660 ignition[1012]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 01:27:51.109635 ignition[1012]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 01:27:51.486573 ignition[1012]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 01:27:51.486573 ignition[1012]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 01:27:51.486573 ignition[1012]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 01:27:51.486573 ignition[1012]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 01:27:51.847395 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 01:27:51.847395 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 01:27:51.847395 ignition[1012]: INFO : files: files passed
Jan 20 01:27:51.847395 ignition[1012]: INFO : Ignition finished successfully
Jan 20 01:27:51.506745 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 01:27:51.670595 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 01:27:51.796567 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 01:27:52.586502 initrd-setup-root-after-ignition[1040]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 01:27:52.584257 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 01:27:52.788874 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:27:52.788874 initrd-setup-root-after-ignition[1043]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:27:52.584460 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 01:27:53.083896 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:27:53.106970 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 01:27:53.352262 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 01:27:53.871377 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 01:27:56.119614 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 01:27:56.132573 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 01:27:56.471661 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 01:27:56.764747 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 01:27:56.924628 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 01:27:57.289519 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 01:27:58.084940 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 01:27:58.145713 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 01:27:58.475302 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:27:58.558671 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:27:58.581656 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 01:27:58.606657 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 01:27:58.607046 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 01:27:58.642484 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 01:27:58.642655 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 01:27:58.642794 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 01:27:58.643004 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 01:27:58.643143 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 01:27:58.643347 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
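[Editor's note: the files, links, and unit presets processed in the files stage above come from the user config fetched from the qemu platform. Such a config is typically authored as Butane YAML and transpiled to Ignition JSON. The sketch below shows the shape of a config that would produce a few of these operations; paths and URLs are taken from the log, everything else (variant/version, unit bodies) is assumed and this is not the actual config used for this boot:]

    # Transpile with: butane --pretty config.bu > config.ign
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true      # yields the "setting preset to enabled" op above
        - name: coreos-metadata.service
          enabled: false     # yields the "setting preset to disabled" op above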
Jan 20 01:27:58.643486 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 01:27:58.643607 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 01:27:58.643869 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 01:27:58.644026 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 01:27:58.644149 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 01:27:58.644339 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 01:27:58.644588 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 01:27:59.500671 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:27:59.530552 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:27:59.617049 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 01:27:59.627406 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:27:59.719877 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 01:27:59.720114 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 01:27:59.787132 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 01:27:59.790752 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 01:27:59.823804 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 01:27:59.877073 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 01:27:59.890495 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:27:59.987403 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 01:28:00.002648 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 01:28:00.016540 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 01:28:00.016800 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 01:28:00.053461 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 01:28:00.053643 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 01:28:00.093555 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 01:28:00.093765 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 01:28:00.094041 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 01:28:00.094299 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 01:28:00.109513 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 01:28:00.114068 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 01:28:00.114375 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 01:28:00.128466 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 01:28:00.128587 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 01:28:00.128785 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:28:00.142705 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 01:28:00.886181 ignition[1067]: INFO : Ignition 2.22.0
Jan 20 01:28:00.886181 ignition[1067]: INFO : Stage: umount
Jan 20 01:28:00.886181 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 01:28:00.886181 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 01:28:00.886181 ignition[1067]: INFO : umount: umount passed
Jan 20 01:28:00.886181 ignition[1067]: INFO : Ignition finished successfully
Jan 20 01:28:00.142983 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 01:28:00.223094 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 01:28:00.409660 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 01:28:00.720549 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 01:28:00.821519 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 01:28:00.821745 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 01:28:00.892004 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 01:28:00.895692 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 01:28:00.963332 systemd[1]: Stopped target network.target - Network.
Jan 20 01:28:00.990118 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 01:28:00.990409 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 01:28:01.014357 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 01:28:01.014491 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 01:28:01.082127 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 01:28:01.082343 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 01:28:01.564946 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 01:28:01.565315 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 01:28:01.700475 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 01:28:01.712632 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 01:28:01.849544 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 01:28:01.880452 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 01:28:02.007637 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 01:28:02.007986 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 01:28:02.094663 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 20 01:28:02.098007 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 01:28:02.100107 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 01:28:02.235441 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 20 01:28:02.245414 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 20 01:28:02.265412 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 01:28:02.267113 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 01:28:02.314350 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 01:28:02.340931 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 01:28:02.341078 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 01:28:02.482386 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 01:28:02.482546 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:28:02.486713 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 01:28:02.486809 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:28:02.524526 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 01:28:02.524640 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:28:02.651448 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:28:02.736700 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 01:28:02.736910 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 20 01:28:02.909512 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 01:28:02.921402 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:28:02.965033 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 01:28:02.965129 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 01:28:03.001700 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 01:28:03.001773 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 01:28:03.014828 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 01:28:03.015015 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 01:28:03.053791 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 01:28:03.053974 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 01:28:03.071135 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 01:28:03.071332 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:28:03.122703 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 01:28:03.294441 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 20 01:28:03.294808 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 01:28:03.348647 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 01:28:03.348742 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:28:03.401836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:28:03.402022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:28:03.598596 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 20 01:28:03.598758 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 20 01:28:03.598837 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 01:28:03.615571 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 01:28:03.615786 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 01:28:03.935260 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 01:28:03.937717 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 01:28:04.047171 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 01:28:04.098311 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 01:28:04.287348 systemd[1]: Switching root.
Jan 20 01:28:04.506659 systemd-journald[203]: Journal stopped
Jan 20 01:28:17.620894 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Jan 20 01:28:17.621063 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 01:28:17.621090 kernel: SELinux: policy capability open_perms=1
Jan 20 01:28:17.621109 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 01:28:17.621133 kernel: SELinux: policy capability always_check_network=0
Jan 20 01:28:17.621151 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 01:28:17.621176 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 01:28:17.621191 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 01:28:17.621285 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 01:28:17.621307 kernel: SELinux: policy capability userspace_initial_context=0
Jan 20 01:28:17.621322 kernel: audit: type=1403 audit(1768872485.923:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 01:28:17.621350 systemd[1]: Successfully loaded SELinux policy in 482.815ms.
Jan 20 01:28:17.621384 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 90.322ms.
Jan 20 01:28:17.621401 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 01:28:17.621423 systemd[1]: Detected virtualization kvm.
Jan 20 01:28:17.621445 systemd[1]: Detected architecture x86-64.
Jan 20 01:28:17.621463 systemd[1]: Detected first boot.
Jan 20 01:28:17.621558 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 01:28:17.621580 zram_generator::config[1115]: No configuration found.
Jan 20 01:28:17.621602 kernel: Guest personality initialized and is inactive
Jan 20 01:28:17.621624 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 20 01:28:17.621640 kernel: Initialized host personality
Jan 20 01:28:17.621656 kernel: NET: Registered PF_VSOCK protocol family
Jan 20 01:28:17.621672 systemd[1]: Populated /etc with preset unit settings.
Jan 20 01:28:17.621696 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 20 01:28:17.621712 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 01:28:17.621737 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 20 01:28:17.621756 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 01:28:17.621777 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 01:28:17.621796 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 01:28:17.621816 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 01:28:17.621836 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 01:28:17.621862 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 01:28:17.621883 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
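[Editor's note: the per-capability SELinux lines above are also exposed at runtime through selinuxfs. A quick way to confirm the loaded policy state from a shell, using the standard SELinux userland:]

    getenforce                                            # prints Enforcing, Permissive, or Disabled
    cat /sys/fs/selinux/policy_capabilities/open_perms    # prints 1, matching the kernel line above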
Jan 20 01:28:17.621901 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 01:28:17.621920 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 01:28:17.621939 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:28:17.621958 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:28:17.626191 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 01:28:17.626271 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 01:28:17.626293 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 01:28:17.626319 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 01:28:17.626342 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 20 01:28:17.626357 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:28:17.626377 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:28:17.626394 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 20 01:28:17.626414 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 20 01:28:17.626430 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 20 01:28:17.626449 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 01:28:17.626470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:28:17.626490 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 01:28:17.626507 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 01:28:17.626566 systemd[1]: Reached target swap.target - Swaps.
Jan 20 01:28:17.626585 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 01:28:17.626602 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 01:28:17.626620 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 20 01:28:17.626637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 01:28:17.626655 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 01:28:17.626677 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 01:28:17.626696 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 01:28:17.626763 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 01:28:17.626783 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 01:28:17.626801 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 01:28:17.626820 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:28:17.626837 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 01:28:17.626855 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 01:28:17.626872 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 01:28:17.626895 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 01:28:17.626914 systemd[1]: Reached target machines.target - Containers.
Jan 20 01:28:17.626933 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 01:28:17.626953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:28:17.631079 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 01:28:17.631114 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 01:28:17.631133 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:28:17.631148 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 01:28:17.631172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:28:17.631188 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 01:28:17.631459 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:28:17.631481 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 01:28:17.631495 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 01:28:17.631510 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 20 01:28:17.635688 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 20 01:28:17.635710 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 20 01:28:17.635739 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:28:17.635760 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 01:28:17.635777 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 01:28:17.635793 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 01:28:17.635807 kernel: fuse: init (API version 7.41)
Jan 20 01:28:17.635823 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 01:28:17.635837 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 20 01:28:17.635853 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 01:28:17.635873 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 20 01:28:17.635889 systemd[1]: Stopped verity-setup.service.
Jan 20 01:28:17.635903 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:28:17.636845 systemd-journald[1200]: Collecting audit messages is disabled.
Jan 20 01:28:17.636893 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 01:28:17.636927 systemd-journald[1200]: Journal started
Jan 20 01:28:17.636959 systemd-journald[1200]: Runtime Journal (/run/log/journal/d557ffa18d1246d38d2b56718ba81b97) is 6M, max 48.3M, 42.2M free.
Jan 20 01:28:12.393759 systemd[1]: Queued start job for default target multi-user.target.
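[Editor's note: the runtime journal sizing above (6M used, 48.3M cap) reflects journald's defaults, which are derived from the size of /run. The caps can be pinned explicitly in journald.conf; a sketch with assumed values, not this host's actual configuration:]

    # /etc/systemd/journald.conf (sketch)
    [Journal]
    RuntimeMaxUse=48M    # cap for the volatile journal in /run/log/journal
    SystemMaxUse=195M    # cap for the persistent journal in /var/log/journal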
Jan 20 01:28:12.458447 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 01:28:12.463975 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 01:28:12.465807 systemd[1]: systemd-journald.service: Consumed 3.397s CPU time.
Jan 20 01:28:17.693422 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 01:28:17.736881 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 01:28:17.839052 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 01:28:17.880499 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 01:28:17.918024 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 01:28:17.965294 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 01:28:17.994350 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 01:28:18.052424 kernel: ACPI: bus type drm_connector registered
Jan 20 01:28:18.063005 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 01:28:18.096757 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 01:28:18.097344 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 01:28:18.144477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:28:18.182799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:28:18.280907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:28:18.292810 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:28:18.343479 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 01:28:18.343928 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 01:28:18.358235 kernel: loop: module loaded
Jan 20 01:28:18.395387 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 01:28:18.395776 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 01:28:18.472607 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:28:18.481151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:28:18.521060 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:28:18.644018 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 01:28:18.680080 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 01:28:18.720604 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 20 01:28:18.765154 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:28:18.799102 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 01:28:18.845264 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 01:28:18.884955 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 01:28:18.911324 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 01:28:18.911449 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 01:28:18.948021 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 20 01:28:18.986295 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 01:28:19.012907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:28:19.072890 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 01:28:19.101099 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 01:28:19.124733 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 01:28:19.152749 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 01:28:19.176894 systemd-journald[1200]: Time spent on flushing to /var/log/journal/d557ffa18d1246d38d2b56718ba81b97 is 359.498ms for 977 entries.
Jan 20 01:28:19.176894 systemd-journald[1200]: System Journal (/var/log/journal/d557ffa18d1246d38d2b56718ba81b97) is 8M, max 195.6M, 187.6M free.
Jan 20 01:28:19.735677 systemd-journald[1200]: Received client request to flush runtime journal.
Jan 20 01:28:19.198477 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 01:28:19.211405 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:28:19.303653 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 01:28:19.368575 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 01:28:19.507847 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 01:28:19.576936 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 01:28:19.758940 kernel: loop0: detected capacity change from 0 to 110984
Jan 20 01:28:19.616455 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 01:28:19.707706 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 01:28:19.747089 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 20 01:28:19.845921 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 01:28:19.872902 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:28:20.189950 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 01:28:20.248740 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 20 01:28:20.383515 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 01:28:20.386370 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 01:28:20.481294 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 01:28:20.583898 kernel: loop1: detected capacity change from 0 to 128560
Jan 20 01:28:20.725592 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jan 20 01:28:20.725640 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jan 20 01:28:20.807721 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
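[Editor's note: systemd-journal-flush.service is what issues the "client request to flush runtime journal" above, moving entries from /run/log/journal to /var/log/journal. The same flush and the resulting footprint can be driven from a shell:]

    journalctl --flush        # ask journald to flush the runtime journal to /var/log/journal
    journalctl --disk-usage   # report space used by active and archived journal files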
Jan 20 01:28:21.026605 kernel: loop2: detected capacity change from 0 to 219144
Jan 20 01:28:22.248479 kernel: loop3: detected capacity change from 0 to 110984
Jan 20 01:28:22.502410 kernel: loop4: detected capacity change from 0 to 128560
Jan 20 01:28:22.637341 kernel: loop5: detected capacity change from 0 to 219144
Jan 20 01:28:23.142368 (sd-merge)[1259]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 20 01:28:23.143806 (sd-merge)[1259]: Merged extensions into '/usr'.
Jan 20 01:28:23.571521 systemd[1]: Reload requested from client PID 1235 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 01:28:23.571670 systemd[1]: Reloading...
Jan 20 01:28:24.329638 zram_generator::config[1284]: No configuration found.
Jan 20 01:28:27.847911 systemd[1]: Reloading finished in 4275 ms.
Jan 20 01:28:27.977883 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 01:28:28.020255 ldconfig[1230]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 01:28:28.037649 systemd[1]: Starting ensure-sysext.service...
Jan 20 01:28:28.057426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 01:28:28.164268 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 01:28:28.193370 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 01:28:28.239680 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 01:28:28.239805 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 01:28:28.244698 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 01:28:28.246577 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 20 01:28:28.253531 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 20 01:28:28.257591 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Jan 20 01:28:28.264437 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Jan 20 01:28:28.264994 systemd[1]: Reload requested from client PID 1324 ('systemctl') (unit ensure-sysext.service)...
Jan 20 01:28:28.271907 systemd[1]: Reloading...
Jan 20 01:28:28.340632 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 01:28:28.340673 systemd-tmpfiles[1325]: Skipping /boot
Jan 20 01:28:29.007628 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 01:28:29.007650 systemd-tmpfiles[1325]: Skipping /boot
Jan 20 01:28:29.317276 zram_generator::config[1353]: No configuration found.
Jan 20 01:28:33.280651 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1218066411 wd_nsec: 1218065727
Jan 20 01:28:33.777395 systemd[1]: Reloading finished in 5502 ms.
Jan 20 01:28:34.248608 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:28:34.596551 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 01:28:34.720495 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 01:28:35.281857 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
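[Editor's note: the (sd-merge) lines show systemd-sysext overlaying the extension images staged earlier by Ignition, including the kubernetes.raw symlink in /etc/extensions, onto /usr; the loopN capacity changes are those images being attached. The merge state can be inspected and re-applied with the systemd-sysext tool:]

    systemd-sysext status     # show hierarchies and which extensions are merged
    systemd-sysext refresh    # re-merge after adding or removing images in /etc/extensions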
Jan 20 01:28:35.500606 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 01:28:35.596072 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:28:35.759414 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 01:28:35.877529 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 01:28:35.955463 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:28:35.972397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:28:37.022615 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:28:37.954526 augenrules[1418]: No rules
Jan 20 01:28:38.560447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:28:38.603694 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:28:38.618797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:28:38.623727 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:28:38.624338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:28:38.655672 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 01:28:38.656378 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 01:28:38.704289 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 01:28:39.272399 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 01:28:39.388575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:28:39.389039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:28:39.407709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:28:39.470673 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:28:39.584082 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:28:39.591613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:28:39.723554 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:28:39.724093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:28:39.734973 systemd-udevd[1401]: Using default interface naming scheme 'v255'.
Jan 20 01:28:39.759648 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:28:39.802479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:28:39.951465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:28:39.977070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
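[Editor's note: the augenrules "No rules" message means no rule fragments were found to compile. augenrules concatenates *.rules files from /etc/audit/rules.d into /etc/audit/audit.rules and loads them; a minimal sketch, where the file name and the watch rule are assumptions:]

    # /etc/audit/rules.d/10-example.rules (hypothetical fragment)
    -w /etc/passwd -p wa -k passwd_changes
    # compile the fragments and load the result into the kernel:
    augenrules --load
    auditctl -l    # confirm the loaded rule set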
Jan 20 01:28:39.977629 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:28:40.003773 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 01:28:40.057132 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 01:28:40.063415 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:28:40.098058 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 01:28:40.142629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:28:40.146589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:28:40.165807 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:28:40.169587 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:28:40.273698 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:28:40.274384 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:28:40.282975 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 01:28:40.323902 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:28:40.380918 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 01:28:40.474122 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:28:40.476688 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 01:28:40.514012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:28:40.558793 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:28:40.710748 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 01:28:40.877649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:28:40.991128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:28:41.021871 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:28:41.022109 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:28:41.086800 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 01:28:41.117823 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 01:28:41.118077 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:28:41.360882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:28:41.372648 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:28:41.395276 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:28:41.395653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:28:41.500103 systemd[1]: Finished ensure-sysext.service.
Jan 20 01:28:41.625515 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 01:28:41.625949 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 01:28:41.943001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:28:41.943536 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:28:41.988257 augenrules[1457]: /sbin/augenrules: No change
Jan 20 01:28:42.184971 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 01:28:42.193085 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 01:28:42.196788 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 01:28:42.267875 augenrules[1500]: No rules
Jan 20 01:28:42.287057 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 01:28:42.290516 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 01:28:42.518863 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 01:28:42.750255 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 01:28:42.807154 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 01:28:43.190855 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 01:28:43.883154 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 01:28:45.006330 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 20 01:28:45.334826 kernel: ACPI: button: Power Button [PWRF]
Jan 20 01:28:48.498471 systemd-resolved[1396]: Positive Trust Anchors:
Jan 20 01:28:48.498739 systemd-resolved[1396]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:28:48.498783 systemd-resolved[1396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:28:48.975665 systemd-resolved[1396]: Defaulting to hostname 'linux'.
Jan 20 01:28:49.001513 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:28:49.051701 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:28:49.702613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:28:49.794531 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 01:28:49.808432 systemd[1]: Reached target time-set.target - System Time Set.
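[Editor's note: systemd-resolved starts with the DNSSEC root trust anchor (the ". IN DS 20326 ..." record) and the standard negative trust anchors for private zones listed above. Once the service is up, its state and resolution path can be checked with resolvectl:]

    resolvectl status              # per-link DNS servers, DNSSEC setting, search domains
    resolvectl query flatcar.org   # resolve through resolved (domain here is just an example)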
Jan 20 01:28:49.866137 systemd-networkd[1482]: lo: Link UP
Jan 20 01:28:49.866172 systemd-networkd[1482]: lo: Gained carrier
Jan 20 01:28:49.893008 systemd-networkd[1482]: Enumeration completed
Jan 20 01:28:49.893620 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 01:28:49.903058 systemd[1]: Reached target network.target - Network.
Jan 20 01:28:49.907828 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 01:28:49.907835 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 01:28:49.924108 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 20 01:28:49.927094 systemd-networkd[1482]: eth0: Link UP
Jan 20 01:28:49.931762 systemd-networkd[1482]: eth0: Gained carrier
Jan 20 01:28:49.931800 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 01:28:49.968601 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 01:28:50.329905 systemd-networkd[1482]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 01:28:50.331900 systemd-timesyncd[1499]: Network configuration changed, trying to establish connection.
Jan 20 01:28:50.356872 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 20 01:28:50.388650 systemd-timesyncd[1499]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 20 01:28:50.388960 systemd-timesyncd[1499]: Initial clock synchronization to Tue 2026-01-20 01:28:50.590879 UTC.
Jan 20 01:28:51.590979 systemd-networkd[1482]: eth0: Gained IPv6LL
Jan 20 01:28:51.720797 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 01:28:52.019108 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 01:28:52.069317 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:28:52.149104 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 01:28:52.223001 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 01:28:52.245060 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 01:28:52.325365 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 20 01:28:52.388135 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 01:28:52.417841 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 01:28:52.443887 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 01:28:52.481779 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 01:28:52.481958 systemd[1]: Reached target paths.target - Path Units.
Jan 20 01:28:52.502730 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 01:28:52.539817 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 01:28:52.606785 systemd[1]: Starting docker.socket - Docker Socket for the API...
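[Editor's note: after "Enumeration completed" the link state networkd logs here (carrier, DHCPv4 lease, IPv6LL) is queryable at any time with networkctl:]

    networkctl list          # one-line operational state per link
    networkctl status eth0   # addresses, gateway, and the matched .network file for eth0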
Jan 20 01:28:52.652506 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 20 01:28:52.688576 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 20 01:28:52.740311 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 20 01:28:52.928631 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 01:28:52.953008 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 20 01:28:53.006082 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 01:28:53.059755 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 01:28:53.091502 systemd[1]: Reached target basic.target - Basic System.
Jan 20 01:28:53.107400 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 01:28:53.107459 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 01:28:53.120755 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 01:28:53.200634 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 20 01:28:53.337505 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 01:28:53.363874 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 01:28:53.709427 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 01:28:53.758744 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 01:28:53.844555 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 01:28:54.169954 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 20 01:28:54.227175 jq[1553]: false
Jan 20 01:28:54.276580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:28:54.381451 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 01:28:54.435647 extend-filesystems[1554]: Found /dev/vda6
Jan 20 01:28:54.470506 extend-filesystems[1554]: Found /dev/vda9
Jan 20 01:28:54.486321 extend-filesystems[1554]: Checking size of /dev/vda9
Jan 20 01:28:54.568792 extend-filesystems[1554]: Resized partition /dev/vda9
Jan 20 01:28:54.585185 extend-filesystems[1567]: resize2fs 1.47.3 (8-Jul-2025)
Jan 20 01:28:54.615431 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 20 01:28:54.670070 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 20 01:28:54.687493 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 20 01:28:54.705764 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 01:28:54.713370 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing passwd entry cache
Jan 20 01:28:54.716043 oslogin_cache_refresh[1555]: Refreshing passwd entry cache
Jan 20 01:28:54.888108 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 01:28:54.924323 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting users, quitting
Jan 20 01:28:54.924323 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 01:28:54.924323 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing group entry cache Jan 20 01:28:54.921424 oslogin_cache_refresh[1555]: Failure getting users, quitting Jan 20 01:28:54.921459 oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 01:28:54.921548 oslogin_cache_refresh[1555]: Refreshing group entry cache Jan 20 01:28:54.994939 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting groups, quitting Jan 20 01:28:55.010109 oslogin_cache_refresh[1555]: Failure getting groups, quitting Jan 20 01:28:55.010584 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 01:28:55.010673 oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 01:28:55.075719 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 01:28:55.193759 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 01:28:55.360698 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 01:28:55.371984 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 01:28:55.404397 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 01:28:55.450274 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 01:28:55.478875 update_engine[1589]: I20260120 01:28:55.478277 1589 main.cc:92] Flatcar Update Engine starting Jan 20 01:28:55.497324 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 01:28:55.502164 jq[1590]: true Jan 20 01:28:55.517046 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 01:28:55.519611 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 01:28:55.537050 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 01:28:55.538359 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 01:28:55.587440 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 01:28:55.591699 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 01:28:55.614033 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 01:28:55.670488 sshd_keygen[1587]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 01:28:55.673378 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 01:28:55.673378 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 01:28:55.673378 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 01:28:55.703677 extend-filesystems[1554]: Resized filesystem in /dev/vda9 Jan 20 01:28:55.718114 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 01:28:55.718602 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 01:28:55.790558 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 01:28:55.791156 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 01:28:55.931960 jq[1600]: true Jan 20 01:28:56.202309 systemd[1]: coreos-metadata.service: Deactivated successfully. 
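extend-filesystems is the first-boot step that grows the root filesystem to fill its partition; the resize2fs 1.47.3 run matches the kernel's "resizing filesystem from 553472 to 1864699 blocks" message, and ext4 supports this while mounted. The manual equivalent is roughly:

    # grow the partition first if necessary (e.g. with growpart from cloud-utils), then:
    resize2fs /dev/vda9    # with no size argument it expands online to fill the partition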
Jan 20 01:28:56.206268 (ntainerd)[1601]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 01:28:56.217070 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 01:28:56.446351 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 01:28:56.482738 tar[1594]: linux-amd64/LICENSE Jan 20 01:28:56.482738 tar[1594]: linux-amd64/helm Jan 20 01:28:56.522440 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 01:28:56.529811 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 01:28:57.097548 systemd-logind[1586]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 01:28:57.097582 systemd-logind[1586]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 01:28:57.098041 systemd-logind[1586]: New seat seat0. Jan 20 01:28:57.110050 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 01:28:57.124389 bash[1639]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:28:57.133262 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 01:28:57.176414 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 01:28:57.177183 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 01:28:57.179151 dbus-daemon[1551]: [system] SELinux support is enabled Jan 20 01:28:57.215186 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 01:28:57.233150 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 01:28:57.257189 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 01:28:57.274395 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 01:28:57.278329 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 01:28:57.290877 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 01:28:57.290909 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 01:28:57.378588 dbus-daemon[1551]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 01:28:57.595075 update_engine[1589]: I20260120 01:28:57.594064 1589 update_check_scheduler.cc:74] Next update check in 2m17s Jan 20 01:28:57.598836 systemd[1]: Started update-engine.service - Update Engine. Jan 20 01:28:57.844750 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 01:28:57.935091 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 01:28:58.194110 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 01:28:58.259659 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 01:28:58.539640 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 01:28:58.703578 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 01:29:01.821007 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
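locksmithd's strategy="reboot" above is the reboot-coordination policy for applying updates; on Flatcar it is normally chosen in /etc/flatcar/update.conf alongside the update group. A sketch (values illustrative):

    # /etc/flatcar/update.conf
    REBOOT_STRATEGY=etcd-lock    # alternatives include reboot and off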
Jan 20 01:29:01.864797 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:41140.service - OpenSSH per-connection server daemon (10.0.0.1:41140). Jan 20 01:29:03.930743 containerd[1601]: time="2026-01-20T01:29:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 01:29:03.957132 containerd[1601]: time="2026-01-20T01:29:03.957012770Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 20 01:29:04.850356 containerd[1601]: time="2026-01-20T01:29:04.845027393Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="115.861µs" Jan 20 01:29:04.853764 containerd[1601]: time="2026-01-20T01:29:04.850874443Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.856412005Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.856778533Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.856809075Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.856862914Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.857116683Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.857135057Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.865896563Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.865976445Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.865999007Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.866010509Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.870181069Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 01:29:04.880107 containerd[1601]: time="2026-01-20T01:29:04.875552735Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:29:05.011685 containerd[1601]: time="2026-01-20T01:29:04.875610137Z" level=info msg="skip loading 
plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:29:05.011685 containerd[1601]: time="2026-01-20T01:29:04.875626097Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 01:29:05.011685 containerd[1601]: time="2026-01-20T01:29:04.875726932Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 01:29:05.011685 containerd[1601]: time="2026-01-20T01:29:04.885890234Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 01:29:05.011685 containerd[1601]: time="2026-01-20T01:29:04.886124972Z" level=info msg="metadata content store policy set" policy=shared Jan 20 01:29:05.029512 containerd[1601]: time="2026-01-20T01:29:05.016786568Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 01:29:05.045758 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 41140 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.048280407Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.048612183Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.048801270Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.049007716Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.049089546Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.049316812Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.049347639Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.049411426Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.049429076Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.049444315Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.049461715Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.049949156Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.049983804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 
01:29:05.056563 containerd[1601]: time="2026-01-20T01:29:05.050010940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 01:29:05.056987 containerd[1601]: time="2026-01-20T01:29:05.050031096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 01:29:05.056987 containerd[1601]: time="2026-01-20T01:29:05.050048445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 01:29:05.056987 containerd[1601]: time="2026-01-20T01:29:05.050065855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 01:29:05.056987 containerd[1601]: time="2026-01-20T01:29:05.050084051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 01:29:05.056987 containerd[1601]: time="2026-01-20T01:29:05.050269808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 01:29:05.056987 containerd[1601]: time="2026-01-20T01:29:05.050297296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 01:29:05.056987 containerd[1601]: time="2026-01-20T01:29:05.050367106Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 01:29:05.056987 containerd[1601]: time="2026-01-20T01:29:05.050442148Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 01:29:05.056987 containerd[1601]: time="2026-01-20T01:29:05.050661558Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 01:29:05.056987 containerd[1601]: time="2026-01-20T01:29:05.050687779Z" level=info msg="Start snapshots syncer" Jan 20 01:29:05.056987 containerd[1601]: time="2026-01-20T01:29:05.050732134Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 01:29:05.065257 containerd[1601]: time="2026-01-20T01:29:05.063108876Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 01:29:05.065257 containerd[1601]: time="2026-01-20T01:29:05.063412681Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.063925549Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064168464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064289046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064307543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064323586Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064341266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064354523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064368815Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064514703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 01:29:05.065898 containerd[1601]: 
time="2026-01-20T01:29:05.064544594Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064566893Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064605816Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064630730Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:29:05.065898 containerd[1601]: time="2026-01-20T01:29:05.064645957Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:29:05.066429 containerd[1601]: time="2026-01-20T01:29:05.064660994Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:29:05.066429 containerd[1601]: time="2026-01-20T01:29:05.064672902Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 01:29:05.066429 containerd[1601]: time="2026-01-20T01:29:05.064840485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 01:29:05.066429 containerd[1601]: time="2026-01-20T01:29:05.064875134Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 01:29:05.066429 containerd[1601]: time="2026-01-20T01:29:05.064965584Z" level=info msg="runtime interface created" Jan 20 01:29:05.066429 containerd[1601]: time="2026-01-20T01:29:05.064979574Z" level=info msg="created NRI interface" Jan 20 01:29:05.066429 containerd[1601]: time="2026-01-20T01:29:05.065042898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 01:29:05.066429 containerd[1601]: time="2026-01-20T01:29:05.065069330Z" level=info msg="Connect containerd service" Jan 20 01:29:05.066429 containerd[1601]: time="2026-01-20T01:29:05.065098970Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 01:29:05.072027 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:29:05.092410 containerd[1601]: time="2026-01-20T01:29:05.092355305Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:29:05.508033 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 01:29:05.537784 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 01:29:06.069277 systemd-logind[1586]: New session 1 of user core. Jan 20 01:29:07.655350 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 01:29:08.129690 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 01:29:09.040831 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 01:29:09.529335 systemd-logind[1586]: New session c1 of user core. 
Jan 20 01:29:10.680378 containerd[1601]: time="2026-01-20T01:29:10.679302852Z" level=info msg="Start subscribing containerd event" Jan 20 01:29:10.690892 tar[1594]: linux-amd64/README.md Jan 20 01:29:10.709534 containerd[1601]: time="2026-01-20T01:29:10.701421056Z" level=info msg="Start recovering state" Jan 20 01:29:10.709534 containerd[1601]: time="2026-01-20T01:29:10.706059781Z" level=info msg="Start event monitor" Jan 20 01:29:10.709534 containerd[1601]: time="2026-01-20T01:29:10.706273276Z" level=info msg="Start cni network conf syncer for default" Jan 20 01:29:10.709534 containerd[1601]: time="2026-01-20T01:29:10.706328208Z" level=info msg="Start streaming server" Jan 20 01:29:10.709534 containerd[1601]: time="2026-01-20T01:29:10.706414420Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 01:29:10.709534 containerd[1601]: time="2026-01-20T01:29:10.706428997Z" level=info msg="runtime interface starting up..." Jan 20 01:29:10.709534 containerd[1601]: time="2026-01-20T01:29:10.706438504Z" level=info msg="starting plugins..." Jan 20 01:29:10.709534 containerd[1601]: time="2026-01-20T01:29:10.706535126Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 01:29:10.769344 containerd[1601]: time="2026-01-20T01:29:10.745630090Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 01:29:10.769344 containerd[1601]: time="2026-01-20T01:29:10.750933160Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 01:29:10.769344 containerd[1601]: time="2026-01-20T01:29:10.751155738Z" level=info msg="containerd successfully booted in 7.087984s" Jan 20 01:29:10.759296 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 01:29:11.041723 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 01:29:11.818081 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 01:29:11.858479 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 01:29:12.989159 systemd[1681]: Queued start job for default target default.target. Jan 20 01:29:13.127350 systemd[1681]: Created slice app.slice - User Application Slice. Jan 20 01:29:13.127402 systemd[1681]: Reached target paths.target - Paths. Jan 20 01:29:13.135350 systemd[1681]: Reached target timers.target - Timers. Jan 20 01:29:13.155006 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 01:29:14.074872 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 01:29:14.078272 systemd[1681]: Reached target sockets.target - Sockets. Jan 20 01:29:14.078372 systemd[1681]: Reached target basic.target - Basic System. Jan 20 01:29:14.078457 systemd[1681]: Reached target default.target - Main User Target. Jan 20 01:29:14.078555 systemd[1681]: Startup finished in 4.168s. Jan 20 01:29:14.079483 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 01:29:14.171073 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 01:29:14.472595 kernel: kvm_amd: TSC scaling supported Jan 20 01:29:14.473098 kernel: kvm_amd: Nested Virtualization enabled Jan 20 01:29:14.473179 kernel: kvm_amd: Nested Paging enabled Jan 20 01:29:14.480633 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 01:29:14.484031 kernel: kvm_amd: PMU virtualization is disabled Jan 20 01:29:17.460032 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:49284.service - OpenSSH per-connection server daemon (10.0.0.1:49284). 
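containerd booted in about 7 s despite two recurring configuration complaints earlier in the log: the unknown-TOML-key warning and the "Configuration migrated from version 2" notice, which means the daemon re-translates an old config.toml on every start. The CRI dump above also shows "SystemdCgroup":true, the setting the kubelet's cgroup driver must later agree with. A rough sketch of acting on both (default paths assumed):

    # emit a migrated config once, review it, then install it and restart containerd
    containerd config migrate > /etc/containerd/config.toml.new
    # in containerd 2.x TOML the cgroup setting lives under:
    #   [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
    #     SystemdCgroup = true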
Jan 20 01:29:19.489782 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 49284 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:29:19.491380 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:29:19.598993 systemd-logind[1586]: New session 2 of user core. Jan 20 01:29:19.823839 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 01:29:21.101409 sshd[1702]: Connection closed by 10.0.0.1 port 49284 Jan 20 01:29:21.107071 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Jan 20 01:29:21.664175 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:49284.service: Deactivated successfully. Jan 20 01:29:21.707409 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 01:29:21.744356 systemd-logind[1586]: Session 2 logged out. Waiting for processes to exit. Jan 20 01:29:21.768771 systemd-logind[1586]: Removed session 2. Jan 20 01:29:21.854430 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:49370.service - OpenSSH per-connection server daemon (10.0.0.1:49370). Jan 20 01:29:24.092895 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 49370 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:29:24.104463 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:29:24.247598 systemd-logind[1586]: New session 3 of user core. Jan 20 01:29:24.294264 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 01:29:24.821316 sshd[1711]: Connection closed by 10.0.0.1 port 49370 Jan 20 01:29:24.822957 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Jan 20 01:29:24.890149 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:49370.service: Deactivated successfully. Jan 20 01:29:24.924863 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 01:29:25.557854 systemd-logind[1586]: Session 3 logged out. Waiting for processes to exit. Jan 20 01:29:25.643658 systemd-logind[1586]: Removed session 3. Jan 20 01:29:25.656585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:29:25.658103 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 01:29:25.664718 systemd[1]: Startup finished in 17.903s (kernel) + 1min 17.700s (initrd) + 1min 20.192s (userspace) = 2min 55.797s. Jan 20 01:29:25.708016 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:29:28.841834 kernel: EDAC MC: Ver: 3.0.0 Jan 20 01:29:33.335978 kubelet[1721]: E0120 01:29:33.335541 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:29:33.367866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:29:33.379344 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:29:33.380378 systemd[1]: kubelet.service: Consumed 5.511s CPU time, 258.3M memory peak. Jan 20 01:29:34.945153 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:42842.service - OpenSSH per-connection server daemon (10.0.0.1:42842). 
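The kubelet failure above begins a crash loop that runs for the rest of this log, and it is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, neither of which has run on this node yet. For orientation, the file that eventually lands there is a KubeletConfiguration, minimally of this shape (a sketch, not the file kubeadm generates):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # must agree with containerd's SystemdCgroup=true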
Jan 20 01:29:35.364334 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 42842 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:29:35.379913 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:29:35.427680 systemd-logind[1586]: New session 4 of user core. Jan 20 01:29:35.601948 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 01:29:35.914162 sshd[1735]: Connection closed by 10.0.0.1 port 42842 Jan 20 01:29:35.913542 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jan 20 01:29:35.964155 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:42842.service: Deactivated successfully. Jan 20 01:29:35.971868 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 01:29:35.979929 systemd-logind[1586]: Session 4 logged out. Waiting for processes to exit. Jan 20 01:29:35.998797 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:42858.service - OpenSSH per-connection server daemon (10.0.0.1:42858). Jan 20 01:29:36.001493 systemd-logind[1586]: Removed session 4. Jan 20 01:29:36.473282 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 42858 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:29:36.489147 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:29:36.552721 systemd-logind[1586]: New session 5 of user core. Jan 20 01:29:36.573009 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 01:29:37.126694 sshd[1744]: Connection closed by 10.0.0.1 port 42858 Jan 20 01:29:37.156536 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Jan 20 01:29:37.193493 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:42858.service: Deactivated successfully. Jan 20 01:29:37.203791 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 01:29:37.216484 systemd-logind[1586]: Session 5 logged out. Waiting for processes to exit. Jan 20 01:29:37.232188 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:42860.service - OpenSSH per-connection server daemon (10.0.0.1:42860). Jan 20 01:29:37.247640 systemd-logind[1586]: Removed session 5. Jan 20 01:29:37.769866 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 42860 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:29:37.797823 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:29:37.839567 systemd-logind[1586]: New session 6 of user core. Jan 20 01:29:37.872612 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 01:29:38.290955 sshd[1753]: Connection closed by 10.0.0.1 port 42860 Jan 20 01:29:38.287024 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Jan 20 01:29:38.350066 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:42930.service - OpenSSH per-connection server daemon (10.0.0.1:42930). Jan 20 01:29:38.352050 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:42860.service: Deactivated successfully. Jan 20 01:29:38.363057 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 01:29:38.401718 systemd-logind[1586]: Session 6 logged out. Waiting for processes to exit. Jan 20 01:29:38.409852 systemd-logind[1586]: Removed session 6. 
Jan 20 01:29:38.863483 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 42930 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:29:38.875089 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:29:38.915021 systemd-logind[1586]: New session 7 of user core. Jan 20 01:29:38.932913 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 01:29:39.179018 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 01:29:39.182094 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:29:42.924841 update_engine[1589]: I20260120 01:29:42.906608 1589 update_attempter.cc:509] Updating boot flags... Jan 20 01:29:43.394773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 01:29:43.507307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:29:45.874544 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 01:29:45.990142 (dockerd)[1804]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 01:29:48.836800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:29:48.897960 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:29:50.729348 kubelet[1816]: E0120 01:29:50.723104 1816 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:29:50.763430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:29:50.774786 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:29:50.775835 systemd[1]: kubelet.service: Consumed 1.526s CPU time, 110.5M memory peak. Jan 20 01:29:54.183415 dockerd[1804]: time="2026-01-20T01:29:54.176684575Z" level=info msg="Starting up" Jan 20 01:29:54.194075 dockerd[1804]: time="2026-01-20T01:29:54.185031786Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 01:29:55.625621 dockerd[1804]: time="2026-01-20T01:29:55.625022414Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 01:29:56.828313 dockerd[1804]: time="2026-01-20T01:29:56.822009779Z" level=info msg="Loading containers: start." Jan 20 01:29:56.987602 kernel: Initializing XFRM netlink socket Jan 20 01:30:01.866864 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 01:30:02.466728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:30:06.855007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:30:06.937975 (kubelet)[1983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:30:07.427840 systemd-networkd[1482]: docker0: Link UP Jan 20 01:30:07.601942 dockerd[1804]: time="2026-01-20T01:30:07.596864610Z" level=info msg="Loading containers: done." 
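The "Scheduled restart job, restart counter is at N" lines that recur from here on are the unit's restart policy re-queuing the failed kubelet; the roughly ten-second gap between each failure and the next scheduled restart is consistent with the stock kubeadm drop-in, which looks approximately like this (assumed, not read from this host):

    [Service]
    Restart=always
    RestartSec=10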
Jan 20 01:30:07.787407 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1105685674-merged.mount: Deactivated successfully. Jan 20 01:30:07.819674 dockerd[1804]: time="2026-01-20T01:30:07.818600878Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 01:30:07.819674 dockerd[1804]: time="2026-01-20T01:30:07.818839002Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 01:30:07.819674 dockerd[1804]: time="2026-01-20T01:30:07.819036444Z" level=info msg="Initializing buildkit" Jan 20 01:30:07.821705 kubelet[1983]: E0120 01:30:07.821176 1983 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:30:07.841987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:30:07.852461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:30:07.853302 systemd[1]: kubelet.service: Consumed 1.044s CPU time, 110M memory peak. Jan 20 01:30:08.638578 dockerd[1804]: time="2026-01-20T01:30:08.606612904Z" level=info msg="Completed buildkit initialization" Jan 20 01:30:08.779797 dockerd[1804]: time="2026-01-20T01:30:08.778164945Z" level=info msg="Daemon has completed initialization" Jan 20 01:30:08.779797 dockerd[1804]: time="2026-01-20T01:30:08.778723379Z" level=info msg="API listen on /run/docker.sock" Jan 20 01:30:08.782335 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 01:30:17.240659 containerd[1601]: time="2026-01-20T01:30:17.232526923Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 20 01:30:18.000670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 01:30:18.012125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:30:19.853946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:30:20.121808 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:30:21.828842 kubelet[2067]: E0120 01:30:21.827759 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:30:21.886907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:30:21.887185 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:30:21.887986 systemd[1]: kubelet.service: Consumed 1.335s CPU time, 113M memory peak. Jan 20 01:30:22.222914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount661122984.mount: Deactivated successfully. Jan 20 01:30:32.090994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 01:30:32.323365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
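"API listen on /run/docker.sock" means dockerd has taken over the Unix socket that docker.socket has held since early boot, so socket-activated clients queued before this point are now served. A quick liveness probe against that socket uses the Docker API's ping endpoint:

    curl --unix-socket /run/docker.sock http://localhost/_ping    # prints OK when the daemon is up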
Jan 20 01:30:35.095523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:30:35.720508 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:30:41.803630 kubelet[2141]: E0120 01:30:41.786668 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:30:41.826913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:30:41.833419 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:30:41.837299 systemd[1]: kubelet.service: Consumed 1.609s CPU time, 111.8M memory peak. Jan 20 01:30:49.869756 containerd[1601]: time="2026-01-20T01:30:49.865688167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:30:49.879906 containerd[1601]: time="2026-01-20T01:30:49.878164814Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 20 01:30:49.893991 containerd[1601]: time="2026-01-20T01:30:49.893877662Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:30:49.987704 containerd[1601]: time="2026-01-20T01:30:49.969714043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:30:50.022734 containerd[1601]: time="2026-01-20T01:30:50.022479337Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 32.777082273s" Jan 20 01:30:50.041587 containerd[1601]: time="2026-01-20T01:30:50.033429328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 20 01:30:50.082558 containerd[1601]: time="2026-01-20T01:30:50.082448766Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 20 01:30:52.043535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 01:30:52.101120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:30:54.965572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
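The kube-apiserver pull above (about 33 s for ~27 MB) goes through containerd's CRI plugin rather than Docker. The same pull can be reproduced or inspected with crictl against the containerd socket, assuming crictl is installed on the host:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.34.3
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images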
Jan 20 01:30:55.191855 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:30:56.909517 kubelet[2162]: E0120 01:30:56.901303 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:30:56.951919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:30:56.955830 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:30:56.965160 systemd[1]: kubelet.service: Consumed 1.139s CPU time, 110.3M memory peak. Jan 20 01:31:06.453010 containerd[1601]: time="2026-01-20T01:31:06.448893416Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 20 01:31:06.453010 containerd[1601]: time="2026-01-20T01:31:06.450187840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:31:06.464916 containerd[1601]: time="2026-01-20T01:31:06.456429340Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:31:06.484117 containerd[1601]: time="2026-01-20T01:31:06.483639147Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 16.40069344s" Jan 20 01:31:06.484117 containerd[1601]: time="2026-01-20T01:31:06.483778207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 20 01:31:06.489918 containerd[1601]: time="2026-01-20T01:31:06.489740630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:31:06.514284 containerd[1601]: time="2026-01-20T01:31:06.513425871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 20 01:31:06.998054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 20 01:31:07.024003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:31:10.675778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:31:10.777113 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:31:11.760477 kubelet[2178]: E0120 01:31:11.759668 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:31:11.790190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:31:11.791279 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:31:11.808787 systemd[1]: kubelet.service: Consumed 1.185s CPU time, 110.6M memory peak. Jan 20 01:31:15.007040 update_engine[1589]: I20260120 01:31:15.004748 1589 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 01:31:15.007040 update_engine[1589]: I20260120 01:31:15.005784 1589 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 01:31:15.023539 update_engine[1589]: I20260120 01:31:15.007436 1589 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 01:31:15.023539 update_engine[1589]: I20260120 01:31:15.009338 1589 omaha_request_params.cc:62] Current group set to stable Jan 20 01:31:15.023539 update_engine[1589]: I20260120 01:31:15.009913 1589 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 01:31:15.023539 update_engine[1589]: I20260120 01:31:15.009932 1589 update_attempter.cc:643] Scheduling an action processor start. Jan 20 01:31:15.023539 update_engine[1589]: I20260120 01:31:15.009959 1589 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 01:31:15.023539 update_engine[1589]: I20260120 01:31:15.010001 1589 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 01:31:15.023539 update_engine[1589]: I20260120 01:31:15.010119 1589 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 01:31:15.023539 update_engine[1589]: I20260120 01:31:15.010135 1589 omaha_request_action.cc:272] Request: Jan 20 01:31:15.023539 update_engine[1589]: [Omaha request XML body stripped from this capture] Jan 20 01:31:15.023539 update_engine[1589]: I20260120 01:31:15.010146 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:31:15.023539 update_engine[1589]: I20260120 01:31:15.016609 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:31:15.023539 update_engine[1589]: I20260120 01:31:15.017752 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 01:31:15.042839 locksmithd[1645]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 01:31:15.059072 update_engine[1589]: E20260120 01:31:15.058995 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:31:15.067979 update_engine[1589]: I20260120 01:31:15.067875 1589 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 01:31:19.503750 containerd[1601]: time="2026-01-20T01:31:19.492464107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:31:19.503750 containerd[1601]: time="2026-01-20T01:31:19.503190189Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 20 01:31:19.517482 containerd[1601]: time="2026-01-20T01:31:19.516463892Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:31:19.562926 containerd[1601]: time="2026-01-20T01:31:19.553013379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:31:19.562926 containerd[1601]: time="2026-01-20T01:31:19.559600507Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 13.045866901s" Jan 20 01:31:19.562926 containerd[1601]: time="2026-01-20T01:31:19.560989014Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 20 01:31:19.588824 containerd[1601]: time="2026-01-20T01:31:19.587383572Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 20 01:31:22.297064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 20 01:31:22.400770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:31:24.668865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:31:24.917842 update_engine[1589]: I20260120 01:31:24.916563 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:31:24.941382 update_engine[1589]: I20260120 01:31:24.919263 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:31:24.941382 update_engine[1589]: I20260120 01:31:24.928531 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
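The failing update check is self-inflicted rather than a network fault: "Posting an Omaha request to disabled" and curl's "Could not resolve host: disabled" show the update server URL is literally the string "disabled", the Flatcar convention for switching off update checks while leaving update-engine running. In /etc/flatcar/update.conf terms (sketch):

    GROUP=stable       # matches "Current group set to stable" above
    SERVER=disabled    # every Omaha check fails fast instead of reaching a real server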
Jan 20 01:31:24.926443 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:31:25.040641 update_engine[1589]: E20260120 01:31:25.035494 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:31:25.044593 update_engine[1589]: I20260120 01:31:25.039862 1589 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 01:31:26.545418 kubelet[2202]: E0120 01:31:26.532539 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:31:26.900041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:31:27.000538 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:31:27.631670 systemd[1]: kubelet.service: Consumed 913ms CPU time, 109.5M memory peak. Jan 20 01:31:35.006273 update_engine[1589]: I20260120 01:31:34.887837 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:31:37.401253 update_engine[1589]: I20260120 01:31:35.203999 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:31:37.401253 update_engine[1589]: I20260120 01:31:36.026077 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:31:46.458338 update_engine[1589]: E20260120 01:31:39.199301 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:31:46.458338 update_engine[1589]: I20260120 01:31:40.202114 1589 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 01:31:48.332499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 20 01:31:52.107903 update_engine[1589]: I20260120 01:31:51.005102 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:31:52.715982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:31:53.327992 update_engine[1589]: I20260120 01:31:52.446449 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:31:53.327992 update_engine[1589]: I20260120 01:31:53.023778 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:31:53.327992 update_engine[1589]: E20260120 01:31:53.309624 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:31:53.327992 update_engine[1589]: I20260120 01:31:53.309973 1589 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 01:31:53.327992 update_engine[1589]: I20260120 01:31:53.317260 1589 omaha_request_action.cc:617] Omaha request response: Jan 20 01:31:53.327992 update_engine[1589]: E20260120 01:31:53.317699 1589 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 01:31:53.338720 update_engine[1589]: I20260120 01:31:53.331496 1589 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 01:31:53.338720 update_engine[1589]: I20260120 01:31:53.331534 1589 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:31:53.338720 update_engine[1589]: I20260120 01:31:53.331544 1589 update_attempter.cc:306] Processing Done. 
Jan 20 01:31:53.342546 update_engine[1589]: E20260120 01:31:53.342492 1589 update_attempter.cc:619] Update failed. Jan 20 01:31:53.342705 update_engine[1589]: I20260120 01:31:53.342680 1589 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 01:31:53.342830 update_engine[1589]: I20260120 01:31:53.342805 1589 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 01:31:53.342894 update_engine[1589]: I20260120 01:31:53.342877 1589 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 20 01:31:53.343044 update_engine[1589]: I20260120 01:31:53.343022 1589 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 01:31:53.352763 update_engine[1589]: I20260120 01:31:53.352555 1589 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 01:31:53.352911 update_engine[1589]: I20260120 01:31:53.352885 1589 omaha_request_action.cc:272] Request: Jan 20 01:31:53.352911 update_engine[1589]: [Omaha request XML body stripped from this capture] Jan 20 01:31:53.353565 update_engine[1589]: I20260120 01:31:53.353540 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:31:53.374427 update_engine[1589]: I20260120 01:31:53.374380 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:31:53.377820 update_engine[1589]: I20260120 01:31:53.377751 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:31:53.378537 locksmithd[1645]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 01:31:53.400028 update_engine[1589]: E20260120 01:31:53.399963 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 20 01:31:53.400783 update_engine[1589]: I20260120 01:31:53.400486 1589 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 01:31:53.400783 update_engine[1589]: I20260120 01:31:53.400522 1589 omaha_request_action.cc:617] Omaha request response: Jan 20 01:31:53.400783 update_engine[1589]: I20260120 01:31:53.400537 1589 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:31:53.400783 update_engine[1589]: I20260120 01:31:53.400547 1589 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 01:31:53.400783 update_engine[1589]: I20260120 01:31:53.400556 1589 update_attempter.cc:306] Processing Done. Jan 20 01:31:53.400783 update_engine[1589]: I20260120 01:31:53.400568 1589 update_attempter.cc:310] Error event sent. Jan 20 01:31:53.400783 update_engine[1589]: I20260120 01:31:53.400586 1589 update_check_scheduler.cc:74] Next update check in 41m30s Jan 20 01:31:53.413042 locksmithd[1645]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 01:31:56.948542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
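After reporting the error event, update-engine settles back to idle and backs off ("Next update check in 41m30s"). The same state machine can be queried on demand with the client Flatcar ships:

    update_engine_client -status    # reports CURRENT_OP=UPDATE_STATUS_IDLE, versions, progress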
Jan 20 01:31:57.043938 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:31:57.634304 kubelet[2219]: E0120 01:31:57.631987 2219 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:31:57.656608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:31:57.656945 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:31:57.657802 systemd[1]: kubelet.service: Consumed 931ms CPU time, 111M memory peak. Jan 20 01:31:57.828402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255306783.mount: Deactivated successfully. Jan 20 01:32:08.224077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 20 01:32:08.275038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:32:14.167667 containerd[1601]: time="2026-01-20T01:32:14.147357906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:32:14.194957 containerd[1601]: time="2026-01-20T01:32:14.177879960Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 20 01:32:14.320413 containerd[1601]: time="2026-01-20T01:32:14.241483557Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:32:14.619405 containerd[1601]: time="2026-01-20T01:32:14.618719922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:32:14.625874 containerd[1601]: time="2026-01-20T01:32:14.625363617Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 55.037845957s" Jan 20 01:32:14.625874 containerd[1601]: time="2026-01-20T01:32:14.625419465Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 20 01:32:14.887070 containerd[1601]: time="2026-01-20T01:32:14.846023720Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 20 01:32:15.745852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:32:15.794118 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:32:18.200915 kubelet[2240]: E0120 01:32:18.196440 2240 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:32:18.240753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:32:18.242868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:32:18.265096 systemd[1]: kubelet.service: Consumed 1.666s CPU time, 110.5M memory peak. Jan 20 01:32:26.999904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547545776.mount: Deactivated successfully. Jan 20 01:32:28.538334 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 20 01:32:28.604082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:32:38.840178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:32:38.909343 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:32:41.028240 kubelet[2270]: E0120 01:32:41.026460 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:32:41.038253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:32:41.038644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:32:41.044137 systemd[1]: kubelet.service: Consumed 2.578s CPU time, 110.6M memory peak. Jan 20 01:32:51.344783 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 20 01:32:51.417566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:32:57.718550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
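The climbing restart counter is one failure repeating: kubelet exits because /var/lib/kubelet/config.yaml does not exist yet, and the unit's Restart= policy launches it again after a short delay. On a kubeadm-managed node this is expected until kubeadm init or kubeadm join writes that file; the "Referenced but unset environment variable" warnings have the same root cause, since kubeadm also creates the env file that defines KUBELET_KUBEADM_ARGS. A sketch of the upstream kubeadm drop-in wiring these pieces together (contents per the standard kubeadm packaging; treat as illustrative for this host):

    $ systemctl cat kubelet.service
    # /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # written by kubeadm at init/join time; defines KUBELET_KUBEADM_ARGS
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # optional user overrides; the unset KUBELET_EXTRA_ARGS warning comes from here
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS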
Jan 20 01:32:57.842756 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:32:58.244968 containerd[1601]: time="2026-01-20T01:32:58.242446709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:32:58.360556 containerd[1601]: time="2026-01-20T01:32:58.350471005Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 20 01:32:58.766001 containerd[1601]: time="2026-01-20T01:32:58.751495322Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:32:58.815892 containerd[1601]: time="2026-01-20T01:32:58.815821483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:32:58.842811 containerd[1601]: time="2026-01-20T01:32:58.831908484Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 43.930495716s" Jan 20 01:32:58.842811 containerd[1601]: time="2026-01-20T01:32:58.832033362Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 20 01:32:58.908780 containerd[1601]: time="2026-01-20T01:32:58.904498604Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 20 01:32:59.281931 kubelet[2326]: E0120 01:32:59.278370 2326 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:32:59.545109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:32:59.561870 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:32:59.580314 systemd[1]: kubelet.service: Consumed 1.806s CPU time, 110.2M memory peak. Jan 20 01:33:08.595580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2396018390.mount: Deactivated successfully. 
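The \x2d sequences in the mount unit names above are systemd escaping, not log corruption: a literal "-" inside a path component must be encoded because "-" doubles as the path separator in unit names. The transient .mount units are containerd's temporary image-unpack mounts being torn down after each layer is applied. The encoding is reproducible directly:

    $ systemd-escape -p /var/lib/containerd/tmpmounts/containerd-mount2396018390
    var-lib-containerd-tmpmounts-containerd\x2dmount2396018390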
Jan 20 01:33:08.898806 containerd[1601]: time="2026-01-20T01:33:08.883372525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:08.911771 containerd[1601]: time="2026-01-20T01:33:08.911572612Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 20 01:33:08.918614 containerd[1601]: time="2026-01-20T01:33:08.918004431Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:08.943861 containerd[1601]: time="2026-01-20T01:33:08.943762659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:33:08.967057 containerd[1601]: time="2026-01-20T01:33:08.965336805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 10.06078329s" Jan 20 01:33:08.967057 containerd[1601]: time="2026-01-20T01:33:08.965399200Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 20 01:33:09.019590 containerd[1601]: time="2026-01-20T01:33:09.016963962Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 20 01:33:09.547253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 20 01:33:09.651189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:33:13.280017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1989130531.mount: Deactivated successfully. Jan 20 01:33:15.047168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:33:15.189723 (kubelet)[2354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:33:21.437161 kubelet[2354]: E0120 01:33:21.432557 2354 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:33:21.580665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:33:21.581059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:33:21.592518 systemd[1]: kubelet.service: Consumed 2.213s CPU time, 110M memory peak. Jan 20 01:33:32.044804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 20 01:33:32.102721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:33:37.232096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
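While kubelet crash-loops, containerd keeps pulling the control-plane images in the background (most likely a kubeadm image pre-pull): kube-proxy, coredns and pause:3.10.1 have completed above, and the etcd:3.6.4-0 pull is still in flight. The CRI image store can be checked independently of kubelet; crictl and the socket path below are assumptions, not taken from this log:

    $ crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    # expected to list registry.k8s.io/kube-proxy:v1.34.3,
    # registry.k8s.io/coredns/coredns:v1.12.1 and registry.k8s.io/pause:3.10.1
    # with the image ids reported in the "Pulled image" lines above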
Jan 20 01:33:37.694185 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:33:39.376164 kubelet[2416]: E0120 01:33:39.375354 2416 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:33:39.422774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:33:39.428698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:33:39.436407 systemd[1]: kubelet.service: Consumed 1.646s CPU time, 110.5M memory peak. Jan 20 01:33:49.523758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 20 01:33:50.211386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:33:55.734328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:33:55.810732 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:33:58.396254 kubelet[2435]: E0120 01:33:58.391830 2435 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:33:58.408714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:33:58.411947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:33:58.416240 systemd[1]: kubelet.service: Consumed 1.679s CPU time, 112.3M memory peak. Jan 20 01:34:08.517030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Jan 20 01:34:08.563538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:34:18.894443 containerd[1601]: time="2026-01-20T01:34:18.877581520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:18.922833 containerd[1601]: time="2026-01-20T01:34:18.921018904Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 20 01:34:18.945330 containerd[1601]: time="2026-01-20T01:34:18.943392647Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:19.348289 containerd[1601]: time="2026-01-20T01:34:19.333936227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:19.338014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:34:19.374792 containerd[1601]: time="2026-01-20T01:34:19.371940784Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 1m10.354746388s" Jan 20 01:34:19.374792 containerd[1601]: time="2026-01-20T01:34:19.372017110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 20 01:34:19.412146 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:34:20.820369 kubelet[2456]: E0120 01:34:20.819030 2456 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:34:20.833144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:34:20.844874 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:34:20.849339 systemd[1]: kubelet.service: Consumed 1.596s CPU time, 108.8M memory peak. Jan 20 01:34:30.999920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Jan 20 01:34:31.030655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:34:33.101816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:34:33.134035 (kubelet)[2495]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:34:34.822105 kubelet[2495]: E0120 01:34:34.817656 2495 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:34:34.838161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:34:34.850363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:34:34.872051 systemd[1]: kubelet.service: Consumed 940ms CPU time, 110.5M memory peak. Jan 20 01:34:45.026144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Jan 20 01:34:45.092672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:34:54.588357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:34:54.619979 (kubelet)[2513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:34:54.624624 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:34:54.630803 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:34:54.631345 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:34:54.631660 systemd[1]: kubelet.service: Consumed 1.468s CPU time, 98.5M memory peak. 
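Here the pattern changes: kubelet is stopped cleanly ("Deactivated successfully") rather than crashing, and in the lines that follow a daemon reload runs and the next kubelet start warns only about KUBELET_EXTRA_ARGS. That is the fingerprint of provisioning completing: /var/lib/kubelet/config.yaml and kubeadm-flags.env now exist, so KUBELET_KUBEADM_ARGS is defined and the config-file error disappears. A quick after-the-fact check, using the paths from the drop-in sketched earlier:

    $ ls -l /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env
    $ cat /var/lib/kubelet/kubeadm-flags.env    # defines KUBELET_KUBEADM_ARGS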
Jan 20 01:34:54.640323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:34:54.834672 systemd[1]: Reload requested from client PID 2522 ('systemctl') (unit session-7.scope)... Jan 20 01:34:54.834718 systemd[1]: Reloading... Jan 20 01:34:55.484465 zram_generator::config[2571]: No configuration found. Jan 20 01:34:57.071576 systemd[1]: Reloading finished in 2235 ms. Jan 20 01:34:57.350067 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 01:34:57.351545 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 01:34:57.352131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:34:57.373416 systemd[1]: kubelet.service: Consumed 319ms CPU time, 98.4M memory peak. Jan 20 01:34:57.395419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:34:58.909965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:34:58.983563 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:35:00.037481 kubelet[2615]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:35:00.037481 kubelet[2615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:35:00.037481 kubelet[2615]: I0120 01:35:00.037416 2615 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:35:04.121522 kubelet[2615]: I0120 01:35:04.118500 2615 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 01:35:04.121522 kubelet[2615]: I0120 01:35:04.118587 2615 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:35:04.130346 kubelet[2615]: I0120 01:35:04.122161 2615 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 01:35:04.130346 kubelet[2615]: I0120 01:35:04.122310 2615 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:35:04.138260 kubelet[2615]: I0120 01:35:04.131132 2615 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:35:04.201974 kubelet[2615]: E0120 01:35:04.197032 2615 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:35:04.204074 kubelet[2615]: I0120 01:35:04.198537 2615 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:35:04.274553 kubelet[2615]: I0120 01:35:04.274510 2615 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:35:04.352301 kubelet[2615]: I0120 01:35:04.349178 2615 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 01:35:04.352301 kubelet[2615]: I0120 01:35:04.349863 2615 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:35:04.352301 kubelet[2615]: I0120 01:35:04.349910 2615 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:35:04.352301 kubelet[2615]: I0120 01:35:04.350336 2615 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:35:04.359191 kubelet[2615]: I0120 01:35:04.350353 2615 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 01:35:04.359191 kubelet[2615]: I0120 01:35:04.350582 2615 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 01:35:04.387132 kubelet[2615]: I0120 01:35:04.383894 2615 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:35:04.387132 kubelet[2615]: I0120 01:35:04.384478 2615 kubelet.go:475] "Attempting to sync node with API server" Jan 20 01:35:04.387132 kubelet[2615]: I0120 01:35:04.384505 2615 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:35:04.387132 kubelet[2615]: I0120 01:35:04.384645 2615 kubelet.go:387] "Adding apiserver pod source" Jan 20 01:35:04.387132 kubelet[2615]: I0120 01:35:04.384737 2615 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:35:04.399063 kubelet[2615]: E0120 01:35:04.392562 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:35:04.405051 kubelet[2615]: E0120 01:35:04.403058 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:35:04.445959 kubelet[2615]: I0120 01:35:04.444400 2615 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 01:35:04.476954 kubelet[2615]: I0120 01:35:04.453553 2615 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:35:04.476954 kubelet[2615]: I0120 01:35:04.453607 2615 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 01:35:04.476954 kubelet[2615]: W0120 01:35:04.465471 2615 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 01:35:04.516605 kubelet[2615]: I0120 01:35:04.509770 2615 server.go:1262] "Started kubelet" Jan 20 01:35:04.531278 kubelet[2615]: I0120 01:35:04.511931 2615 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:35:04.531278 kubelet[2615]: I0120 01:35:04.527074 2615 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 01:35:04.540089 kubelet[2615]: I0120 01:35:04.534585 2615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:35:04.540089 kubelet[2615]: I0120 01:35:04.536269 2615 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:35:04.540089 kubelet[2615]: I0120 01:35:04.522426 2615 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:35:04.560173 kubelet[2615]: I0120 01:35:04.549757 2615 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:35:04.562925 kubelet[2615]: I0120 01:35:04.561038 2615 server.go:310] "Adding debug handlers to kubelet server" Jan 20 01:35:04.591086 kubelet[2615]: I0120 01:35:04.577464 2615 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:35:04.591086 kubelet[2615]: I0120 01:35:04.577675 2615 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:35:04.591086 kubelet[2615]: E0120 01:35:04.573458 2615 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4c88e60ef2ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,LastTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:35:04.591086 kubelet[2615]: I0120 01:35:04.580647 2615 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 01:35:04.591086 kubelet[2615]: 
E0120 01:35:04.581716 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:35:04.591086 kubelet[2615]: I0120 01:35:04.583759 2615 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 01:35:04.599268 kubelet[2615]: I0120 01:35:04.598542 2615 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:35:04.609987 kubelet[2615]: I0120 01:35:04.602605 2615 reconciler.go:29] "Reconciler: start to sync state" Jan 20 01:35:04.609987 kubelet[2615]: E0120 01:35:04.606113 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:35:04.609987 kubelet[2615]: E0120 01:35:04.606943 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" Jan 20 01:35:04.670449 kubelet[2615]: E0120 01:35:04.670295 2615 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:35:04.676003 kubelet[2615]: I0120 01:35:04.675705 2615 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 20 01:35:04.694711 kubelet[2615]: E0120 01:35:04.694663 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:35:04.801094 kubelet[2615]: E0120 01:35:04.800091 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:35:04.808916 kubelet[2615]: E0120 01:35:04.808793 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Jan 20 01:35:04.902283 kubelet[2615]: E0120 01:35:04.901329 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:35:05.006079 kubelet[2615]: E0120 01:35:05.005805 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:35:05.022087 kubelet[2615]: I0120 01:35:05.020145 2615 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:35:05.022087 kubelet[2615]: I0120 01:35:05.020174 2615 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:35:05.022087 kubelet[2615]: I0120 01:35:05.020412 2615 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:35:05.050373 kubelet[2615]: I0120 01:35:05.039349 2615 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 20 01:35:05.050373 kubelet[2615]: I0120 01:35:05.039426 2615 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 01:35:05.050373 kubelet[2615]: I0120 01:35:05.039494 2615 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 01:35:05.050373 kubelet[2615]: E0120 01:35:05.039573 2615 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:35:05.050373 kubelet[2615]: E0120 01:35:05.040774 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:35:05.108325 kubelet[2615]: I0120 01:35:05.099312 2615 policy_none.go:49] "None policy: Start" Jan 20 01:35:05.108571 kubelet[2615]: I0120 01:35:05.108549 2615 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 01:35:05.110299 kubelet[2615]: E0120 01:35:05.108885 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:35:05.117191 kubelet[2615]: I0120 01:35:05.111612 2615 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 01:35:05.142711 kubelet[2615]: I0120 01:35:05.131372 2615 policy_none.go:47] "Start" Jan 20 01:35:05.142711 kubelet[2615]: E0120 01:35:05.141944 2615 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:35:05.192949 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 01:35:05.215649 kubelet[2615]: E0120 01:35:05.214898 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:35:05.215649 kubelet[2615]: E0120 01:35:05.215588 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Jan 20 01:35:05.247403 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:35:05.279152 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 20 01:35:05.311977 kubelet[2615]: E0120 01:35:05.308662 2615 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:35:05.311977 kubelet[2615]: I0120 01:35:05.309070 2615 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:35:05.311977 kubelet[2615]: I0120 01:35:05.309091 2615 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:35:05.316275 kubelet[2615]: I0120 01:35:05.313155 2615 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:35:05.316275 kubelet[2615]: E0120 01:35:05.315335 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:35:05.638320 kubelet[2615]: I0120 01:35:05.617137 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e7d46329e3ccb894bd18634759b0844-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e7d46329e3ccb894bd18634759b0844\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:35:05.647707 kubelet[2615]: I0120 01:35:05.647659 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e7d46329e3ccb894bd18634759b0844-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e7d46329e3ccb894bd18634759b0844\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:35:05.649527 kubelet[2615]: I0120 01:35:05.649492 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e7d46329e3ccb894bd18634759b0844-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6e7d46329e3ccb894bd18634759b0844\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:35:05.650470 kubelet[2615]: E0120 01:35:05.650365 2615 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:35:05.672832 kubelet[2615]: E0120 01:35:05.663843 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:35:05.680511 kubelet[2615]: E0120 01:35:05.680189 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:35:05.685263 kubelet[2615]: E0120 01:35:05.684494 2615 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:35:05.695569 kubelet[2615]: I0120 01:35:05.695498 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:35:05.699697 kubelet[2615]: E0120 01:35:05.699652 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 20 01:35:05.738959 kubelet[2615]: E0120 01:35:05.737268 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:35:05.769500 kubelet[2615]: I0120 01:35:05.766031 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:35:05.769500 kubelet[2615]: I0120 01:35:05.766114 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:35:05.769500 kubelet[2615]: I0120 01:35:05.766151 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:35:05.781279 kubelet[2615]: I0120 01:35:05.780847 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:35:05.781279 kubelet[2615]: I0120 01:35:05.780952 2615 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:35:05.818474 systemd[1]: Created slice kubepods-burstable-pod6e7d46329e3ccb894bd18634759b0844.slice - libcontainer container kubepods-burstable-pod6e7d46329e3ccb894bd18634759b0844.slice. Jan 20 01:35:05.879965 kubelet[2615]: E0120 01:35:05.879854 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:05.881403 kubelet[2615]: I0120 01:35:05.881343 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:35:05.930572 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 20 01:35:05.942108 kubelet[2615]: E0120 01:35:05.931155 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:05.944532 kubelet[2615]: I0120 01:35:05.934126 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:35:05.966505 kubelet[2615]: E0120 01:35:05.949171 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 20 01:35:05.966694 containerd[1601]: time="2026-01-20T01:35:05.956061781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6e7d46329e3ccb894bd18634759b0844,Namespace:kube-system,Attempt:0,}" Jan 20 01:35:05.993265 kubelet[2615]: E0120 01:35:05.992556 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:06.012467 kubelet[2615]: E0120 01:35:06.012120 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:06.032034 kubelet[2615]: E0120 01:35:06.022497 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s" Jan 20 01:35:06.032272 containerd[1601]: time="2026-01-20T01:35:06.027402882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 20 01:35:06.048736 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Jan 20 01:35:06.084792 kubelet[2615]: E0120 01:35:06.079841 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:06.105024 kubelet[2615]: E0120 01:35:06.102409 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:06.105305 containerd[1601]: time="2026-01-20T01:35:06.103159054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 20 01:35:07.124789 kubelet[2615]: E0120 01:35:07.116810 2615 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:35:07.144166 kubelet[2615]: E0120 01:35:07.122400 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:35:07.144166 kubelet[2615]: I0120 01:35:07.142433 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:35:07.144166 kubelet[2615]: E0120 01:35:07.142884 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 20 01:35:07.761716 kubelet[2615]: E0120 01:35:07.747628 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="3.2s" Jan 20 01:35:07.866879 kubelet[2615]: E0120 01:35:07.865691 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:35:08.300607 kubelet[2615]: I0120 01:35:08.120812 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:35:08.374532 kubelet[2615]: E0120 01:35:08.351857 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 20 01:35:08.536298 kubelet[2615]: E0120 01:35:08.534095 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:35:08.700804 kubelet[2615]: E0120 01:35:08.700454 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:35:08.841594 kubelet[2615]: E0120 01:35:08.838961 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:35:09.413909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350476636.mount: Deactivated successfully. Jan 20 01:35:09.642875 containerd[1601]: time="2026-01-20T01:35:09.639558161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:35:09.725681 containerd[1601]: time="2026-01-20T01:35:09.724007946Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 01:35:09.833930 containerd[1601]: time="2026-01-20T01:35:09.832047747Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:35:09.855917 containerd[1601]: time="2026-01-20T01:35:09.850910399Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:35:10.005688 containerd[1601]: time="2026-01-20T01:35:10.005594692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 01:35:10.021984 kubelet[2615]: I0120 01:35:10.021721 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:35:10.030540 kubelet[2615]: E0120 01:35:10.022833 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 20 01:35:10.053008 containerd[1601]: time="2026-01-20T01:35:10.045174826Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:35:10.110445 containerd[1601]: time="2026-01-20T01:35:10.104119606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 01:35:10.124114 containerd[1601]: time="2026-01-20T01:35:10.115628194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:35:10.132482 containerd[1601]: time="2026-01-20T01:35:10.130404060Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" 
in 4.083832211s" Jan 20 01:35:10.146359 containerd[1601]: time="2026-01-20T01:35:10.145116995Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 4.146710818s" Jan 20 01:35:10.210446 containerd[1601]: time="2026-01-20T01:35:10.207033108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 4.089142009s" Jan 20 01:35:10.975874 containerd[1601]: time="2026-01-20T01:35:10.951474733Z" level=info msg="connecting to shim 373f3034efc7fb59ffd2e811cfc795250b74df27d657adf5b691027c6e2517b4" address="unix:///run/containerd/s/30823f3c40766631010e78da43c07ffba1044d0309b03939c2407f204babffd6" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:35:11.003992 kubelet[2615]: E0120 01:35:10.991913 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="6.4s" Jan 20 01:35:11.023527 kubelet[2615]: E0120 01:35:11.004626 2615 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4c88e60ef2ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,LastTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:35:11.821174 kubelet[2615]: E0120 01:35:11.820847 2615 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:35:11.839649 containerd[1601]: time="2026-01-20T01:35:11.838630177Z" level=info msg="connecting to shim 41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7" address="unix:///run/containerd/s/9129f3959f5e9283d6a67cf2cc4ab3904fcb2a03c6dab64ed6b13386e87b76e3" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:35:11.926571 containerd[1601]: time="2026-01-20T01:35:11.925711837Z" level=info msg="connecting to shim d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387" address="unix:///run/containerd/s/6e91a995477f8793a4c1b3ef2e8a5c979bc5b41b26937d30bb37b0b4bc643b41" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:35:12.306295 kubelet[2615]: E0120 01:35:12.279171 2615 reflector.go:205] "Failed to watch" err="failed to list 
*v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:35:12.306295 kubelet[2615]: E0120 01:35:12.301991 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:35:13.291271 systemd[1]: Started cri-containerd-373f3034efc7fb59ffd2e811cfc795250b74df27d657adf5b691027c6e2517b4.scope - libcontainer container 373f3034efc7fb59ffd2e811cfc795250b74df27d657adf5b691027c6e2517b4. Jan 20 01:35:13.325608 systemd[1]: Started cri-containerd-41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7.scope - libcontainer container 41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7. Jan 20 01:35:13.337155 kubelet[2615]: E0120 01:35:13.337047 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:35:13.423158 kubelet[2615]: I0120 01:35:13.422292 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:35:13.423158 kubelet[2615]: E0120 01:35:13.423046 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 20 01:35:14.787355 kubelet[2615]: E0120 01:35:14.780096 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:35:15.750616 kubelet[2615]: E0120 01:35:15.750367 2615 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:35:15.816862 containerd[1601]: time="2026-01-20T01:35:15.798379974Z" level=error msg="get state for 373f3034efc7fb59ffd2e811cfc795250b74df27d657adf5b691027c6e2517b4" error="context deadline exceeded" Jan 20 01:35:15.816862 containerd[1601]: time="2026-01-20T01:35:15.798854462Z" level=warning msg="unknown status" status=0 Jan 20 01:35:15.988883 systemd[1]: Started cri-containerd-d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387.scope - libcontainer container d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387. 
Jan 20 01:35:17.651386 kubelet[2615]: E0120 01:35:17.624540 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="7s" Jan 20 01:35:17.735918 containerd[1601]: time="2026-01-20T01:35:17.735814521Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:35:19.174953 containerd[1601]: time="2026-01-20T01:35:19.174576718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7\"" Jan 20 01:35:19.220118 kubelet[2615]: E0120 01:35:19.220066 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:19.297630 containerd[1601]: time="2026-01-20T01:35:19.297531474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6e7d46329e3ccb894bd18634759b0844,Namespace:kube-system,Attempt:0,} returns sandbox id \"373f3034efc7fb59ffd2e811cfc795250b74df27d657adf5b691027c6e2517b4\"" Jan 20 01:35:19.620319 kubelet[2615]: E0120 01:35:19.620050 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:19.708119 containerd[1601]: time="2026-01-20T01:35:19.695558779Z" level=info msg="CreateContainer within sandbox \"41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:35:19.734032 containerd[1601]: time="2026-01-20T01:35:19.733813477Z" level=info msg="CreateContainer within sandbox \"373f3034efc7fb59ffd2e811cfc795250b74df27d657adf5b691027c6e2517b4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:35:19.846786 kubelet[2615]: I0120 01:35:19.839356 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:35:19.846786 kubelet[2615]: E0120 01:35:19.840098 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 20 01:35:19.998029 containerd[1601]: time="2026-01-20T01:35:19.987458652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387\"" Jan 20 01:35:20.028453 kubelet[2615]: E0120 01:35:20.026729 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:20.183513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723588633.mount: Deactivated successfully. 
Jan 20 01:35:20.277502 kubelet[2615]: E0120 01:35:20.269888 2615 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:35:20.317566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644624931.mount: Deactivated successfully. Jan 20 01:35:20.318008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3230327306.mount: Deactivated successfully. Jan 20 01:35:20.343002 containerd[1601]: time="2026-01-20T01:35:20.336825434Z" level=info msg="CreateContainer within sandbox \"d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:35:20.361831 containerd[1601]: time="2026-01-20T01:35:20.346588174Z" level=info msg="Container b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:35:20.386754 containerd[1601]: time="2026-01-20T01:35:20.374687355Z" level=info msg="Container 1a5ec8c2fc44209b5d08955dac3c6fcf615f419f4bff92813ef63bf4d4d8ea50: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:35:21.126633 kubelet[2615]: E0120 01:35:21.108315 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:35:21.126633 kubelet[2615]: E0120 01:35:21.118868 2615 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4c88e60ef2ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,LastTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:35:21.289931 containerd[1601]: time="2026-01-20T01:35:21.281728805Z" level=info msg="Container 0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:35:22.376385 containerd[1601]: time="2026-01-20T01:35:22.344695525Z" level=info msg="CreateContainer within sandbox \"41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240\"" Jan 20 01:35:22.411864 containerd[1601]: time="2026-01-20T01:35:22.388535033Z" level=info msg="StartContainer for \"b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240\"" Jan 20 01:35:22.435136 containerd[1601]: time="2026-01-20T01:35:22.428586917Z" level=info msg="CreateContainer within sandbox 
\"373f3034efc7fb59ffd2e811cfc795250b74df27d657adf5b691027c6e2517b4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a5ec8c2fc44209b5d08955dac3c6fcf615f419f4bff92813ef63bf4d4d8ea50\"" Jan 20 01:35:22.440988 containerd[1601]: time="2026-01-20T01:35:22.439788248Z" level=info msg="connecting to shim b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240" address="unix:///run/containerd/s/9129f3959f5e9283d6a67cf2cc4ab3904fcb2a03c6dab64ed6b13386e87b76e3" protocol=ttrpc version=3 Jan 20 01:35:22.449468 containerd[1601]: time="2026-01-20T01:35:22.449418659Z" level=info msg="StartContainer for \"1a5ec8c2fc44209b5d08955dac3c6fcf615f419f4bff92813ef63bf4d4d8ea50\"" Jan 20 01:35:22.510925 containerd[1601]: time="2026-01-20T01:35:22.505399360Z" level=info msg="connecting to shim 1a5ec8c2fc44209b5d08955dac3c6fcf615f419f4bff92813ef63bf4d4d8ea50" address="unix:///run/containerd/s/30823f3c40766631010e78da43c07ffba1044d0309b03939c2407f204babffd6" protocol=ttrpc version=3 Jan 20 01:35:22.622995 containerd[1601]: time="2026-01-20T01:35:22.622395538Z" level=info msg="CreateContainer within sandbox \"d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d\"" Jan 20 01:35:22.761160 containerd[1601]: time="2026-01-20T01:35:22.750147054Z" level=info msg="StartContainer for \"0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d\"" Jan 20 01:35:22.826109 containerd[1601]: time="2026-01-20T01:35:22.825932591Z" level=info msg="connecting to shim 0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d" address="unix:///run/containerd/s/6e91a995477f8793a4c1b3ef2e8a5c979bc5b41b26937d30bb37b0b4bc643b41" protocol=ttrpc version=3 Jan 20 01:35:23.622440 kubelet[2615]: E0120 01:35:23.610724 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:35:23.665300 systemd[1]: Started cri-containerd-b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240.scope - libcontainer container b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240. Jan 20 01:35:23.906171 systemd[1]: Started cri-containerd-1a5ec8c2fc44209b5d08955dac3c6fcf615f419f4bff92813ef63bf4d4d8ea50.scope - libcontainer container 1a5ec8c2fc44209b5d08955dac3c6fcf615f419f4bff92813ef63bf4d4d8ea50. Jan 20 01:35:24.150980 systemd[1]: Started cri-containerd-0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d.scope - libcontainer container 0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d. 
Jan 20 01:35:24.653365 kubelet[2615]: E0120 01:35:24.653309 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="7s" Jan 20 01:35:25.806688 kubelet[2615]: E0120 01:35:25.795892 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:35:25.840919 kubelet[2615]: E0120 01:35:25.840866 2615 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:35:25.895880 kubelet[2615]: E0120 01:35:25.891832 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:35:26.477286 containerd[1601]: time="2026-01-20T01:35:26.103868137Z" level=error msg="get state for b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240" error="context deadline exceeded" Jan 20 01:35:26.477286 containerd[1601]: time="2026-01-20T01:35:26.109119933Z" level=warning msg="unknown status" status=0 Jan 20 01:35:26.783175 containerd[1601]: time="2026-01-20T01:35:26.782724977Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:35:26.931261 kubelet[2615]: I0120 01:35:26.927806 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:35:26.945831 kubelet[2615]: E0120 01:35:26.945586 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jan 20 01:35:28.048761 containerd[1601]: time="2026-01-20T01:35:28.043100271Z" level=info msg="StartContainer for \"b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240\" returns successfully" Jan 20 01:35:29.449467 kubelet[2615]: E0120 01:35:29.446744 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:29.449467 kubelet[2615]: E0120 01:35:29.447124 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:29.679972 containerd[1601]: time="2026-01-20T01:35:29.679876637Z" level=info msg="StartContainer for \"0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d\" returns successfully" Jan 20 01:35:29.953608 containerd[1601]: time="2026-01-20T01:35:29.953295642Z" level=info msg="StartContainer for \"1a5ec8c2fc44209b5d08955dac3c6fcf615f419f4bff92813ef63bf4d4d8ea50\" returns successfully" Jan 20 01:35:30.888777 kubelet[2615]: E0120 01:35:30.888720 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:30.931757 kubelet[2615]: E0120 01:35:30.898137 2615 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:30.957021 kubelet[2615]: E0120 01:35:30.944491 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:30.957021 kubelet[2615]: E0120 01:35:30.944670 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:30.975485 kubelet[2615]: E0120 01:35:30.974697 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:30.985158 kubelet[2615]: E0120 01:35:30.983985 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:31.229805 kubelet[2615]: E0120 01:35:31.215029 2615 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4c88e60ef2ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,LastTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:35:35.144060 kubelet[2615]: I0120 01:35:35.143782 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:35:35.246307 kubelet[2615]: E0120 01:35:35.242153 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:35.246307 kubelet[2615]: E0120 01:35:35.242672 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:35.287992 kubelet[2615]: E0120 01:35:35.287941 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:35.293731 kubelet[2615]: E0120 01:35:35.292491 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:35.294323 kubelet[2615]: E0120 01:35:35.294090 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:35.304125 kubelet[2615]: E0120 01:35:35.303787 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:40.333850 kubelet[2615]: E0120 01:35:40.308694 2615 eviction_manager.go:292] "Eviction manager: failed to get summary stats" 
err="failed to get node info: node \"localhost\" not found" Jan 20 01:35:45.995919 kubelet[2615]: E0120 01:35:45.936803 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 20 01:35:46.316643 kubelet[2615]: E0120 01:35:46.194625 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:35:46.938787 kubelet[2615]: E0120 01:35:46.933633 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:46.946109 kubelet[2615]: E0120 01:35:46.945887 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:47.300744 kubelet[2615]: E0120 01:35:47.300526 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:47.323281 kubelet[2615]: E0120 01:35:47.321865 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:47.555739 kubelet[2615]: E0120 01:35:47.554068 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:47.555739 kubelet[2615]: E0120 01:35:47.554530 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:48.701873 kubelet[2615]: E0120 01:35:48.701643 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:48.748527 kubelet[2615]: E0120 01:35:48.717908 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:35:50.484411 kubelet[2615]: E0120 01:35:50.481368 2615 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:35:50.505709 kubelet[2615]: E0120 01:35:50.490978 2615 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:35:50.505709 kubelet[2615]: E0120 01:35:50.491110 2615 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:35:53.294812 kubelet[2615]: I0120 01:35:53.275333 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:35:56.355295 kubelet[2615]: E0120 01:35:56.351749 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:35:56.355295 kubelet[2615]: E0120 01:35:56.351652 2615 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4c88e60ef2ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,LastTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:35:57.024088 kubelet[2615]: E0120 01:35:57.023050 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:35:57.040454 kubelet[2615]: E0120 01:35:57.032579 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:35:57.579097 kubelet[2615]: E0120 01:35:57.578767 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:35:57.603717 kubelet[2615]: E0120 01:35:57.603317 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:00.520373 kubelet[2615]: E0120 01:36:00.508758 2615 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:36:02.297856 kubelet[2615]: E0120 01:36:02.296675 2615 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:36:02.982986 kubelet[2615]: E0120 01:36:02.979889 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 20 01:36:03.486144 kubelet[2615]: E0120 01:36:03.450150 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:36:10.640978 kubelet[2615]: E0120 01:36:10.552726 2615 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"localhost\" not found" Jan 20 01:36:10.908639 kubelet[2615]: I0120 01:36:10.895624 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:36:16.926008 kubelet[2615]: E0120 01:36:16.912690 2615 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4c88e60ef2ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,LastTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:36:20.599632 kubelet[2615]: E0120 01:36:20.594961 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 20 01:36:20.653543 kubelet[2615]: E0120 01:36:20.653475 2615 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:36:20.915844 kubelet[2615]: E0120 01:36:20.912139 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 01:36:28.071783 kubelet[2615]: I0120 01:36:28.068761 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:36:30.724389 kubelet[2615]: E0120 01:36:30.715559 2615 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:36:32.734819 kubelet[2615]: E0120 01:36:32.725364 2615 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:36:36.554033 kubelet[2615]: E0120 01:36:36.478735 2615 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:36:36.586603 kubelet[2615]: E0120 01:36:36.574405 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:38.199879 kubelet[2615]: E0120 01:36:38.189422 2615 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 01:36:38.685332 kubelet[2615]: I0120 01:36:38.685140 2615 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:36:38.686901 kubelet[2615]: E0120 01:36:38.686499 2615 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:36:39.073047 kubelet[2615]: E0120 01:36:39.072823 2615 
event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4c88e60ef2ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,LastTimestamp:2026-01-20 01:35:04.508764927 +0000 UTC m=+5.421777718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:36:39.722705 kubelet[2615]: E0120 01:36:39.722346 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:40.134445 kubelet[2615]: E0120 01:36:40.133864 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:40.242787 kubelet[2615]: E0120 01:36:40.240432 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:40.491151 kubelet[2615]: E0120 01:36:40.430025 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:40.532391 kubelet[2615]: E0120 01:36:40.532189 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:40.660955 kubelet[2615]: E0120 01:36:40.639854 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:40.727353 kubelet[2615]: E0120 01:36:40.724910 2615 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:36:40.777041 kubelet[2615]: E0120 01:36:40.773448 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:40.892185 kubelet[2615]: E0120 01:36:40.881680 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:41.180254 kubelet[2615]: E0120 01:36:41.105596 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:41.250545 kubelet[2615]: E0120 01:36:41.213638 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:41.330999 kubelet[2615]: E0120 01:36:41.330793 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:41.643790 kubelet[2615]: E0120 01:36:41.558719 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:41.719045 kubelet[2615]: E0120 01:36:41.718961 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:41.825437 kubelet[2615]: E0120 01:36:41.825371 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:41.933045 kubelet[2615]: E0120 01:36:41.928954 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:42.031014 kubelet[2615]: E0120 
01:36:42.029287 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:42.179085 kubelet[2615]: E0120 01:36:42.133877 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:42.467085 kubelet[2615]: E0120 01:36:42.456499 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:42.565007 kubelet[2615]: E0120 01:36:42.564548 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:42.830798 kubelet[2615]: E0120 01:36:42.734474 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:43.012107 kubelet[2615]: E0120 01:36:43.006561 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:43.120679 kubelet[2615]: E0120 01:36:43.119522 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:43.275112 kubelet[2615]: E0120 01:36:43.251832 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:43.391766 kubelet[2615]: E0120 01:36:43.386747 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:43.690161 kubelet[2615]: E0120 01:36:43.640709 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:43.748948 kubelet[2615]: E0120 01:36:43.748707 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:43.851772 kubelet[2615]: E0120 01:36:43.851653 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:44.252441 kubelet[2615]: E0120 01:36:44.190445 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:44.328816 kubelet[2615]: E0120 01:36:44.326020 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:44.513767 kubelet[2615]: E0120 01:36:44.500740 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:44.613148 kubelet[2615]: E0120 01:36:44.612984 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:44.715126 kubelet[2615]: E0120 01:36:44.714509 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:44.825360 kubelet[2615]: E0120 01:36:44.814693 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:44.917328 kubelet[2615]: E0120 01:36:44.917162 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:45.135101 kubelet[2615]: E0120 01:36:45.095793 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:45.233669 kubelet[2615]: E0120 01:36:45.206519 2615 kubelet_node_status.go:404] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jan 20 01:36:45.599179 kubelet[2615]: E0120 01:36:45.598467 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:45.711423 kubelet[2615]: E0120 01:36:45.710390 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:45.831709 kubelet[2615]: E0120 01:36:45.831282 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:46.086149 kubelet[2615]: E0120 01:36:46.081143 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:46.191935 kubelet[2615]: E0120 01:36:46.186135 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:46.290834 kubelet[2615]: E0120 01:36:46.286718 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:46.387770 kubelet[2615]: E0120 01:36:46.387424 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:46.490929 kubelet[2615]: E0120 01:36:46.488104 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:46.598708 kubelet[2615]: E0120 01:36:46.598639 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:46.699806 kubelet[2615]: E0120 01:36:46.699485 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:46.857477 kubelet[2615]: E0120 01:36:46.854369 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:46.975386 kubelet[2615]: E0120 01:36:46.962354 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:47.077438 kubelet[2615]: E0120 01:36:47.077333 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:47.256749 kubelet[2615]: E0120 01:36:47.234583 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:47.370287 kubelet[2615]: E0120 01:36:47.359103 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:47.473784 kubelet[2615]: E0120 01:36:47.473701 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:47.583123 kubelet[2615]: E0120 01:36:47.582678 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:47.876713 kubelet[2615]: E0120 01:36:47.845892 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:47.950251 kubelet[2615]: E0120 01:36:47.947789 2615 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:36:47.993770 kubelet[2615]: I0120 01:36:47.989442 2615 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 01:36:48.495482 kubelet[2615]: I0120 01:36:48.492060 2615 
kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:36:48.495482 kubelet[2615]: I0120 01:36:48.492576 2615 apiserver.go:52] "Watching apiserver" Jan 20 01:36:48.543896 kubelet[2615]: E0120 01:36:48.543757 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:48.587340 kubelet[2615]: I0120 01:36:48.586702 2615 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 01:36:48.608147 kubelet[2615]: I0120 01:36:48.608036 2615 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 01:36:48.617499 kubelet[2615]: E0120 01:36:48.615597 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:48.742122 kubelet[2615]: E0120 01:36:48.741401 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:36:56.622315 kubelet[2615]: I0120 01:36:56.610838 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.610655601 podStartE2EDuration="8.610655601s" podCreationTimestamp="2026-01-20 01:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:36:56.53455737 +0000 UTC m=+117.447570140" watchObservedRunningTime="2026-01-20 01:36:56.610655601 +0000 UTC m=+117.523668381" Jan 20 01:36:57.303142 kubelet[2615]: I0120 01:36:57.298762 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=9.298664724 podStartE2EDuration="9.298664724s" podCreationTimestamp="2026-01-20 01:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:36:57.292104135 +0000 UTC m=+118.205116915" watchObservedRunningTime="2026-01-20 01:36:57.298664724 +0000 UTC m=+118.211677513" Jan 20 01:36:57.975591 kubelet[2615]: I0120 01:36:57.973301 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=9.954324975 podStartE2EDuration="9.954324975s" podCreationTimestamp="2026-01-20 01:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:36:57.938073749 +0000 UTC m=+118.851086549" watchObservedRunningTime="2026-01-20 01:36:57.954324975 +0000 UTC m=+118.867337745" Jan 20 01:37:04.752329 kubelet[2615]: E0120 01:37:04.718599 2615 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Jan 20 01:37:06.406711 kubelet[2615]: E0120 01:37:06.403565 2615 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:37:06.767324 kubelet[2615]: E0120 01:37:06.657190 2615 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.601s" Jan 20 01:37:11.413720 kubelet[2615]: 
E0120 01:37:11.413303 2615 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:37:15.260400 systemd[1]: Reload requested from client PID 2925 ('systemctl') (unit session-7.scope)... Jan 20 01:37:15.263466 systemd[1]: Reloading... Jan 20 01:37:16.766469 kubelet[2615]: E0120 01:37:16.744395 2615 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:37:16.920118 zram_generator::config[2964]: No configuration found. Jan 20 01:37:18.682782 kubelet[2615]: E0120 01:37:18.680482 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:37:18.884431 kubelet[2615]: E0120 01:37:18.868160 2615 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:37:22.619252 systemd[1]: Reloading finished in 7349 ms. Jan 20 01:37:22.685403 kubelet[2615]: E0120 01:37:22.653471 2615 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:37:22.685403 kubelet[2615]: E0120 01:37:22.654530 2615 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.591s" Jan 20 01:37:23.195161 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:37:23.423819 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:37:23.435888 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:37:23.436053 systemd[1]: kubelet.service: Consumed 15.331s CPU time, 133.7M memory peak. Jan 20 01:37:23.498039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:37:27.775271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:37:27.873911 (kubelet)[3013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:37:29.532398 kubelet[3013]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:37:29.532398 kubelet[3013]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 01:37:29.532398 kubelet[3013]: I0120 01:37:29.531479 3013 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:37:29.953379 kubelet[3013]: I0120 01:37:29.934669 3013 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 01:37:29.953379 kubelet[3013]: I0120 01:37:29.934720 3013 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:37:29.953379 kubelet[3013]: I0120 01:37:29.935164 3013 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 01:37:29.953379 kubelet[3013]: I0120 01:37:29.935191 3013 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:37:29.953379 kubelet[3013]: I0120 01:37:29.936111 3013 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:37:30.043593 kubelet[3013]: I0120 01:37:30.028689 3013 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 01:37:30.076601 kubelet[3013]: I0120 01:37:30.062384 3013 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:37:30.550174 kubelet[3013]: I0120 01:37:30.544480 3013 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:37:30.701925 kubelet[3013]: I0120 01:37:30.688290 3013 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 20 01:37:30.701925 kubelet[3013]: I0120 01:37:30.688709 3013 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:37:30.701925 kubelet[3013]: I0120 01:37:30.688802 3013 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:37:30.701925 kubelet[3013]: I0120 01:37:30.696965 3013 topology_manager.go:138] "Creating topology manager with none 
policy" Jan 20 01:37:30.719491 kubelet[3013]: I0120 01:37:30.696990 3013 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 01:37:30.719491 kubelet[3013]: I0120 01:37:30.697512 3013 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 01:37:30.733944 kubelet[3013]: I0120 01:37:30.733556 3013 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:37:30.738239 kubelet[3013]: I0120 01:37:30.734876 3013 kubelet.go:475] "Attempting to sync node with API server" Jan 20 01:37:30.738239 kubelet[3013]: I0120 01:37:30.734920 3013 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:37:30.738239 kubelet[3013]: I0120 01:37:30.734954 3013 kubelet.go:387] "Adding apiserver pod source" Jan 20 01:37:30.738239 kubelet[3013]: I0120 01:37:30.734981 3013 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:37:30.751328 kubelet[3013]: I0120 01:37:30.750164 3013 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 01:37:30.807640 kubelet[3013]: I0120 01:37:30.807492 3013 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:37:30.807951 kubelet[3013]: I0120 01:37:30.807929 3013 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 01:37:30.858476 kubelet[3013]: I0120 01:37:30.850509 3013 server.go:1262] "Started kubelet" Jan 20 01:37:30.870025 kubelet[3013]: I0120 01:37:30.859407 3013 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:37:30.870025 kubelet[3013]: I0120 01:37:30.862705 3013 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:37:30.870025 kubelet[3013]: I0120 01:37:30.866848 3013 server.go:310] "Adding debug handlers to kubelet server" Jan 20 01:37:30.880664 kubelet[3013]: I0120 01:37:30.877995 3013 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:37:30.880664 kubelet[3013]: I0120 01:37:30.878306 3013 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 01:37:30.880664 kubelet[3013]: I0120 01:37:30.879893 3013 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:37:30.909481 kubelet[3013]: I0120 01:37:30.905668 3013 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:37:30.912248 kubelet[3013]: I0120 01:37:30.910868 3013 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 01:37:30.912248 kubelet[3013]: E0120 01:37:30.911064 3013 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:37:30.912248 kubelet[3013]: I0120 01:37:30.911726 3013 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 01:37:30.912248 kubelet[3013]: I0120 01:37:30.911958 3013 reconciler.go:29] "Reconciler: start to sync state" Jan 20 01:37:30.923886 kubelet[3013]: I0120 01:37:30.920992 3013 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:37:30.923886 kubelet[3013]: I0120 01:37:30.921150 3013 factory.go:221] Registration of the crio container factory failed: 
Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:37:30.980434 kubelet[3013]: I0120 01:37:30.979577 3013 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:37:31.043191 kubelet[3013]: E0120 01:37:31.042695 3013 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:37:31.079810 kubelet[3013]: E0120 01:37:31.071560 3013 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:37:31.442376 kubelet[3013]: I0120 01:37:31.430628 3013 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 01:37:31.527838 kubelet[3013]: I0120 01:37:31.527773 3013 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 20 01:37:31.541087 kubelet[3013]: I0120 01:37:31.541038 3013 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 01:37:31.542592 kubelet[3013]: I0120 01:37:31.541548 3013 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 01:37:31.542592 kubelet[3013]: E0120 01:37:31.541768 3013 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:37:31.642584 kubelet[3013]: E0120 01:37:31.642464 3013 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:37:31.747034 kubelet[3013]: I0120 01:37:31.740500 3013 apiserver.go:52] "Watching apiserver" Jan 20 01:37:31.854118 kubelet[3013]: E0120 01:37:31.850151 3013 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:37:32.283817 kubelet[3013]: E0120 01:37:32.273538 3013 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:37:32.376981 kubelet[3013]: I0120 01:37:32.307991 3013 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:37:32.377369 kubelet[3013]: I0120 01:37:32.308475 3013 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:37:32.377560 kubelet[3013]: I0120 01:37:32.377535 3013 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:37:32.385355 kubelet[3013]: I0120 01:37:32.378075 3013 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 01:37:32.385594 kubelet[3013]: I0120 01:37:32.385522 3013 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 01:37:32.385765 kubelet[3013]: I0120 01:37:32.385747 3013 policy_none.go:49] "None policy: Start" Jan 20 01:37:32.385900 kubelet[3013]: I0120 01:37:32.385884 3013 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 01:37:32.386057 kubelet[3013]: I0120 01:37:32.386036 3013 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 01:37:32.386401 kubelet[3013]: I0120 01:37:32.386378 3013 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 20 01:37:32.386540 kubelet[3013]: I0120 01:37:32.386524 3013 policy_none.go:47] "Start" Jan 20 01:37:33.098305 kubelet[3013]: E0120 01:37:33.098248 3013 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:37:33.277322 kubelet[3013]: E0120 01:37:33.277133 3013 manager.go:513] 
"Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:37:33.285535 kubelet[3013]: I0120 01:37:33.284740 3013 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:37:33.285535 kubelet[3013]: I0120 01:37:33.284766 3013 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:37:33.290440 kubelet[3013]: I0120 01:37:33.290419 3013 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:37:33.414443 kubelet[3013]: E0120 01:37:33.414300 3013 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 01:37:33.889487 kubelet[3013]: I0120 01:37:33.858070 3013 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:37:34.931336 kubelet[3013]: I0120 01:37:34.929845 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:37:34.931336 kubelet[3013]: I0120 01:37:34.930048 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:37:34.931336 kubelet[3013]: I0120 01:37:34.930086 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:37:34.931336 kubelet[3013]: I0120 01:37:34.930394 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e7d46329e3ccb894bd18634759b0844-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e7d46329e3ccb894bd18634759b0844\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:37:34.931336 kubelet[3013]: I0120 01:37:34.930418 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e7d46329e3ccb894bd18634759b0844-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e7d46329e3ccb894bd18634759b0844\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:37:34.970686 kubelet[3013]: I0120 01:37:34.930444 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e7d46329e3ccb894bd18634759b0844-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6e7d46329e3ccb894bd18634759b0844\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:37:34.970686 kubelet[3013]: I0120 01:37:34.930467 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:37:34.970686 kubelet[3013]: I0120 01:37:34.930488 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:37:34.970686 kubelet[3013]: I0120 01:37:34.969526 3013 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:37:35.030789 kubelet[3013]: I0120 01:37:35.021318 3013 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 01:37:35.036754 kubelet[3013]: I0120 01:37:35.035433 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:37:35.334744 kubelet[3013]: E0120 01:37:35.333836 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:37:35.584323 kubelet[3013]: E0120 01:37:35.583695 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:37:36.346527 kubelet[3013]: I0120 01:37:36.345646 3013 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 01:37:36.346527 kubelet[3013]: I0120 01:37:36.345975 3013 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:37:36.346527 kubelet[3013]: I0120 01:37:36.346143 3013 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 01:37:36.421770 containerd[1601]: time="2026-01-20T01:37:36.412639754Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 20 01:37:36.521871 kubelet[3013]: I0120 01:37:36.517908 3013 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 01:37:48.002539 kernel: sched: DL replenish lagged too much Jan 20 01:37:50.125400 kubelet[3013]: E0120 01:37:50.124785 3013 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T01:37:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T01:37:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T01:37:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T01:37:36Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.36:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Jan 20 01:37:50.372068 systemd[1]: cri-containerd-0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d.scope: Deactivated successfully. Jan 20 01:37:50.373870 systemd[1]: cri-containerd-0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d.scope: Consumed 7.765s CPU time, 46.7M memory peak. Jan 20 01:37:50.599982 kubelet[3013]: E0120 01:37:50.599921 3013 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 01:37:50.788402 containerd[1601]: time="2026-01-20T01:37:50.788333723Z" level=info msg="received container exit event container_id:\"0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d\" id:\"0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d\" pid:2853 exit_status:1 exited_at:{seconds:1768873070 nanos:782823716}" Jan 20 01:37:50.889642 kubelet[3013]: E0120 01:37:50.829138 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.219s" Jan 20 01:37:51.111516 systemd[1]: cri-containerd-b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240.scope: Deactivated successfully. Jan 20 01:37:51.122541 systemd[1]: cri-containerd-b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240.scope: Consumed 9.649s CPU time, 21.5M memory peak. Jan 20 01:37:51.190111 containerd[1601]: time="2026-01-20T01:37:51.190006793Z" level=info msg="received container exit event container_id:\"b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240\" id:\"b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240\" pid:2834 exit_status:1 exited_at:{seconds:1768873071 nanos:182778709}" Jan 20 01:37:51.845125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d-rootfs.mount: Deactivated successfully. Jan 20 01:37:51.919116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240-rootfs.mount: Deactivated successfully. 
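
[Editor's note: the failed node-status patch in the kubelet_node_status error above is logged with several layers of quoting; unescaped once, it is plain JSON. The sketch below carries the decoded body verbatim from the log and prints which node conditions the kubelet was trying to heartbeat when the 10s API timeout expired.]

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	// Decoded verbatim from the "Error updating node status" entry above.
	patch := `{"status":{"$setElementOrder/conditions":[{"type":"MemoryPressure"},{"type":"DiskPressure"},{"type":"PIDPressure"},{"type":"Ready"}],"conditions":[{"lastHeartbeatTime":"2026-01-20T01:37:36Z","type":"MemoryPressure"},{"lastHeartbeatTime":"2026-01-20T01:37:36Z","type":"DiskPressure"},{"lastHeartbeatTime":"2026-01-20T01:37:36Z","type":"PIDPressure"},{"lastHeartbeatTime":"2026-01-20T01:37:36Z","type":"Ready"}]}}`
	var body struct {
		Status struct {
			Conditions []struct {
				Type              string `json:"type"`
				LastHeartbeatTime string `json:"lastHeartbeatTime"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal([]byte(patch), &body); err != nil {
		log.Fatal(err)
	}
	for _, c := range body.Status.Conditions {
		fmt.Printf("%-14s heartbeat %s\n", c.Type, c.LastHeartbeatTime)
	}
}
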
Jan 20 01:37:52.472035 kubelet[3013]: I0120 01:37:52.468932 3013 scope.go:117] "RemoveContainer" containerID="b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240" Jan 20 01:37:52.537567 kubelet[3013]: I0120 01:37:52.528682 3013 scope.go:117] "RemoveContainer" containerID="0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d" Jan 20 01:37:52.623765 containerd[1601]: time="2026-01-20T01:37:52.623699627Z" level=info msg="CreateContainer within sandbox \"d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 20 01:37:52.630294 containerd[1601]: time="2026-01-20T01:37:52.629921567Z" level=info msg="CreateContainer within sandbox \"41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 20 01:37:52.875535 containerd[1601]: time="2026-01-20T01:37:52.875302318Z" level=info msg="Container 149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:37:53.105450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069952710.mount: Deactivated successfully. Jan 20 01:37:53.176415 containerd[1601]: time="2026-01-20T01:37:53.171614052Z" level=info msg="CreateContainer within sandbox \"d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170\"" Jan 20 01:37:53.197304 containerd[1601]: time="2026-01-20T01:37:53.197177826Z" level=info msg="Container 61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:37:53.246669 containerd[1601]: time="2026-01-20T01:37:53.220991225Z" level=info msg="StartContainer for \"149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170\"" Jan 20 01:37:53.293838 containerd[1601]: time="2026-01-20T01:37:53.286080643Z" level=info msg="connecting to shim 149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170" address="unix:///run/containerd/s/6e91a995477f8793a4c1b3ef2e8a5c979bc5b41b26937d30bb37b0b4bc643b41" protocol=ttrpc version=3 Jan 20 01:37:53.421557 containerd[1601]: time="2026-01-20T01:37:53.421498744Z" level=info msg="CreateContainer within sandbox \"41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada\"" Jan 20 01:37:53.437081 containerd[1601]: time="2026-01-20T01:37:53.432412876Z" level=info msg="StartContainer for \"61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada\"" Jan 20 01:37:53.456648 containerd[1601]: time="2026-01-20T01:37:53.454680966Z" level=info msg="connecting to shim 61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada" address="unix:///run/containerd/s/9129f3959f5e9283d6a67cf2cc4ab3904fcb2a03c6dab64ed6b13386e87b76e3" protocol=ttrpc version=3 Jan 20 01:37:54.064735 systemd[1]: Started cri-containerd-149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170.scope - libcontainer container 149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170. Jan 20 01:37:54.331041 systemd[1]: Started cri-containerd-61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada.scope - libcontainer container 61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada. 
Jan 20 01:37:55.393169 containerd[1601]: time="2026-01-20T01:37:55.392794865Z" level=info msg="StartContainer for \"149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170\" returns successfully" Jan 20 01:37:55.794998 containerd[1601]: time="2026-01-20T01:37:55.794770786Z" level=info msg="StartContainer for \"61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada\" returns successfully" Jan 20 01:38:00.357841 sudo[1763]: pam_unix(sudo:session): session closed for user root Jan 20 01:38:00.443043 sshd[1762]: Connection closed by 10.0.0.1 port 42930 Jan 20 01:38:00.460505 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jan 20 01:38:00.511143 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:42930.service: Deactivated successfully. Jan 20 01:38:00.554848 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 01:38:00.556077 systemd[1]: session-7.scope: Consumed 21.228s CPU time, 237.3M memory peak. Jan 20 01:38:00.567353 systemd-logind[1586]: Session 7 logged out. Waiting for processes to exit. Jan 20 01:38:00.590154 systemd-logind[1586]: Removed session 7. Jan 20 01:38:18.937678 kubelet[3013]: E0120 01:38:18.937483 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.43s" Jan 20 01:38:26.661525 kubelet[3013]: E0120 01:38:26.648570 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.922s" Jan 20 01:38:42.903316 kubelet[3013]: I0120 01:38:42.900799 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/acb1c9c4-d5f3-40a8-8927-f8385ed4d76c-kube-proxy\") pod \"kube-proxy-hvj6g\" (UID: \"acb1c9c4-d5f3-40a8-8927-f8385ed4d76c\") " pod="kube-system/kube-proxy-hvj6g" Jan 20 01:38:42.903316 kubelet[3013]: I0120 01:38:42.900855 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acb1c9c4-d5f3-40a8-8927-f8385ed4d76c-lib-modules\") pod \"kube-proxy-hvj6g\" (UID: \"acb1c9c4-d5f3-40a8-8927-f8385ed4d76c\") " pod="kube-system/kube-proxy-hvj6g" Jan 20 01:38:42.903316 kubelet[3013]: I0120 01:38:42.900878 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pgl5\" (UniqueName: \"kubernetes.io/projected/acb1c9c4-d5f3-40a8-8927-f8385ed4d76c-kube-api-access-6pgl5\") pod \"kube-proxy-hvj6g\" (UID: \"acb1c9c4-d5f3-40a8-8927-f8385ed4d76c\") " pod="kube-system/kube-proxy-hvj6g" Jan 20 01:38:42.903316 kubelet[3013]: I0120 01:38:42.900910 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acb1c9c4-d5f3-40a8-8927-f8385ed4d76c-xtables-lock\") pod \"kube-proxy-hvj6g\" (UID: \"acb1c9c4-d5f3-40a8-8927-f8385ed4d76c\") " pod="kube-system/kube-proxy-hvj6g" Jan 20 01:38:43.312174 systemd[1]: Created slice kubepods-besteffort-podacb1c9c4_d5f3_40a8_8927_f8385ed4d76c.slice - libcontainer container kubepods-besteffort-podacb1c9c4_d5f3_40a8_8927_f8385ed4d76c.slice. Jan 20 01:38:46.391547 systemd[1]: Created slice kubepods-burstable-podfd3f9256_1b69_42cf_aafc_b3e43745429c.slice - libcontainer container kubepods-burstable-podfd3f9256_1b69_42cf_aafc_b3e43745429c.slice. 
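
[Editor's note: the "Created slice" entries above follow a visible naming convention: kubepods-<qos>-pod<uid>.slice, with the pod UID's dashes turned into underscores (systemd reserves "-" for slice hierarchy). The sketch below reconstructs only that mapping as read off the log; the real kubelet goes through its cgroup manager, this is just the convention the unit names exhibit.]

package main

import (
	"fmt"
	"strings"
)

// podSlice mirrors the slice names visible in the journal above.
func podSlice(qos, uid string) string {
	return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	// Both UIDs come from the log: kube-proxy-hvj6g and kube-flannel-ds-rx9w4.
	fmt.Println(podSlice("besteffort", "acb1c9c4-d5f3-40a8-8927-f8385ed4d76c"))
	// kubepods-besteffort-podacb1c9c4_d5f3_40a8_8927_f8385ed4d76c.slice
	fmt.Println(podSlice("burstable", "fd3f9256-1b69-42cf-aafc-b3e43745429c"))
	// kubepods-burstable-podfd3f9256_1b69_42cf_aafc_b3e43745429c.slice
}
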
Jan 20 01:38:46.423554 kubelet[3013]: I0120 01:38:46.423404 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/fd3f9256-1b69-42cf-aafc-b3e43745429c-cni\") pod \"kube-flannel-ds-rx9w4\" (UID: \"fd3f9256-1b69-42cf-aafc-b3e43745429c\") " pod="kube-flannel/kube-flannel-ds-rx9w4" Jan 20 01:38:46.434862 kubelet[3013]: I0120 01:38:46.434820 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwdxs\" (UniqueName: \"kubernetes.io/projected/fd3f9256-1b69-42cf-aafc-b3e43745429c-kube-api-access-pwdxs\") pod \"kube-flannel-ds-rx9w4\" (UID: \"fd3f9256-1b69-42cf-aafc-b3e43745429c\") " pod="kube-flannel/kube-flannel-ds-rx9w4" Jan 20 01:38:46.435025 kubelet[3013]: I0120 01:38:46.435006 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fd3f9256-1b69-42cf-aafc-b3e43745429c-run\") pod \"kube-flannel-ds-rx9w4\" (UID: \"fd3f9256-1b69-42cf-aafc-b3e43745429c\") " pod="kube-flannel/kube-flannel-ds-rx9w4" Jan 20 01:38:46.435131 kubelet[3013]: I0120 01:38:46.435112 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/fd3f9256-1b69-42cf-aafc-b3e43745429c-flannel-cfg\") pod \"kube-flannel-ds-rx9w4\" (UID: \"fd3f9256-1b69-42cf-aafc-b3e43745429c\") " pod="kube-flannel/kube-flannel-ds-rx9w4" Jan 20 01:38:46.435298 kubelet[3013]: I0120 01:38:46.435277 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/fd3f9256-1b69-42cf-aafc-b3e43745429c-cni-plugin\") pod \"kube-flannel-ds-rx9w4\" (UID: \"fd3f9256-1b69-42cf-aafc-b3e43745429c\") " pod="kube-flannel/kube-flannel-ds-rx9w4" Jan 20 01:38:46.435397 kubelet[3013]: I0120 01:38:46.435380 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd3f9256-1b69-42cf-aafc-b3e43745429c-xtables-lock\") pod \"kube-flannel-ds-rx9w4\" (UID: \"fd3f9256-1b69-42cf-aafc-b3e43745429c\") " pod="kube-flannel/kube-flannel-ds-rx9w4" Jan 20 01:38:46.900138 kubelet[3013]: E0120 01:38:46.899596 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.325s" Jan 20 01:38:46.939808 containerd[1601]: time="2026-01-20T01:38:46.938894377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvj6g,Uid:acb1c9c4-d5f3-40a8-8927-f8385ed4d76c,Namespace:kube-system,Attempt:0,}" Jan 20 01:38:47.887569 containerd[1601]: time="2026-01-20T01:38:47.882935416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rx9w4,Uid:fd3f9256-1b69-42cf-aafc-b3e43745429c,Namespace:kube-flannel,Attempt:0,}" Jan 20 01:38:47.974796 containerd[1601]: time="2026-01-20T01:38:47.969611061Z" level=info msg="connecting to shim 993a6bf6257f98cc9109fdfd690f3cbff8a6b8176034ab3a64576d3fbf496c27" address="unix:///run/containerd/s/c6f37b5df001d4c72cc905b427b2572bf7eb587bbb4107c03cf6dddfad5a2051" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:38:48.454642 containerd[1601]: time="2026-01-20T01:38:48.454457535Z" level=info msg="connecting to shim c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4" 
address="unix:///run/containerd/s/f1eedab19d619c83489af80c0dec6d73edcc0256c2f270d15651bd5f892b67d7" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:38:48.596109 systemd[1]: Started cri-containerd-993a6bf6257f98cc9109fdfd690f3cbff8a6b8176034ab3a64576d3fbf496c27.scope - libcontainer container 993a6bf6257f98cc9109fdfd690f3cbff8a6b8176034ab3a64576d3fbf496c27. Jan 20 01:38:49.202598 systemd[1]: Started cri-containerd-c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4.scope - libcontainer container c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4. Jan 20 01:38:50.751390 containerd[1601]: time="2026-01-20T01:38:50.650910349Z" level=error msg="get state for 993a6bf6257f98cc9109fdfd690f3cbff8a6b8176034ab3a64576d3fbf496c27" error="context deadline exceeded" Jan 20 01:38:50.751390 containerd[1601]: time="2026-01-20T01:38:50.651040360Z" level=warning msg="unknown status" status=0 Jan 20 01:38:51.113447 containerd[1601]: time="2026-01-20T01:38:51.107657042Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:38:54.140106 containerd[1601]: time="2026-01-20T01:38:54.088109738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rx9w4,Uid:fd3f9256-1b69-42cf-aafc-b3e43745429c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4\"" Jan 20 01:38:54.140106 containerd[1601]: time="2026-01-20T01:38:54.099900685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvj6g,Uid:acb1c9c4-d5f3-40a8-8927-f8385ed4d76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"993a6bf6257f98cc9109fdfd690f3cbff8a6b8176034ab3a64576d3fbf496c27\"" Jan 20 01:38:54.304330 containerd[1601]: time="2026-01-20T01:38:54.302605876Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 20 01:38:54.353914 containerd[1601]: time="2026-01-20T01:38:54.353498251Z" level=info msg="CreateContainer within sandbox \"993a6bf6257f98cc9109fdfd690f3cbff8a6b8176034ab3a64576d3fbf496c27\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 01:38:55.083562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3689337317.mount: Deactivated successfully. Jan 20 01:38:55.128068 containerd[1601]: time="2026-01-20T01:38:55.123455936Z" level=info msg="Container c3763aa918221c55458d870e5b009d608fd74089c9c4aa8d966324d6d35a96ce: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:38:55.293822 containerd[1601]: time="2026-01-20T01:38:55.292576587Z" level=info msg="CreateContainer within sandbox \"993a6bf6257f98cc9109fdfd690f3cbff8a6b8176034ab3a64576d3fbf496c27\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c3763aa918221c55458d870e5b009d608fd74089c9c4aa8d966324d6d35a96ce\"" Jan 20 01:38:55.371011 containerd[1601]: time="2026-01-20T01:38:55.366456335Z" level=info msg="StartContainer for \"c3763aa918221c55458d870e5b009d608fd74089c9c4aa8d966324d6d35a96ce\"" Jan 20 01:38:55.389602 containerd[1601]: time="2026-01-20T01:38:55.389545106Z" level=info msg="connecting to shim c3763aa918221c55458d870e5b009d608fd74089c9c4aa8d966324d6d35a96ce" address="unix:///run/containerd/s/c6f37b5df001d4c72cc905b427b2572bf7eb587bbb4107c03cf6dddfad5a2051" protocol=ttrpc version=3 Jan 20 01:38:55.825173 systemd[1]: Started cri-containerd-c3763aa918221c55458d870e5b009d608fd74089c9c4aa8d966324d6d35a96ce.scope - libcontainer container c3763aa918221c55458d870e5b009d608fd74089c9c4aa8d966324d6d35a96ce. 
Jan 20 01:38:57.189181 containerd[1601]: time="2026-01-20T01:38:57.184294118Z" level=info msg="StartContainer for \"c3763aa918221c55458d870e5b009d608fd74089c9c4aa8d966324d6d35a96ce\" returns successfully" Jan 20 01:38:58.635100 kubelet[3013]: I0120 01:38:58.627050 3013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hvj6g" podStartSLOduration=17.626854476 podStartE2EDuration="17.626854476s" podCreationTimestamp="2026-01-20 01:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:38:58.597179805 +0000 UTC m=+90.414694007" watchObservedRunningTime="2026-01-20 01:38:58.626854476 +0000 UTC m=+90.444368687" Jan 20 01:39:01.773651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4184127955.mount: Deactivated successfully. Jan 20 01:39:05.051417 containerd[1601]: time="2026-01-20T01:39:05.050810933Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Jan 20 01:39:05.054496 containerd[1601]: time="2026-01-20T01:39:05.054457608Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:05.099116 containerd[1601]: time="2026-01-20T01:39:05.099059763Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:05.145167 containerd[1601]: time="2026-01-20T01:39:05.144945405Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:05.151167 containerd[1601]: time="2026-01-20T01:39:05.148961993Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 10.846293719s" Jan 20 01:39:05.151167 containerd[1601]: time="2026-01-20T01:39:05.149007217Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 20 01:39:05.590287 containerd[1601]: time="2026-01-20T01:39:05.589725710Z" level=info msg="CreateContainer within sandbox \"c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 20 01:39:06.601013 containerd[1601]: time="2026-01-20T01:39:06.586938778Z" level=info msg="Container bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:39:06.841534 kubelet[3013]: E0120 01:39:06.838917 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.256s" Jan 20 01:39:08.233271 containerd[1601]: time="2026-01-20T01:39:08.225085529Z" level=info msg="CreateContainer within sandbox \"c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} 
returns container id \"bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40\"" Jan 20 01:39:08.417546 containerd[1601]: time="2026-01-20T01:39:08.413377881Z" level=info msg="StartContainer for \"bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40\"" Jan 20 01:39:08.439328 containerd[1601]: time="2026-01-20T01:39:08.431043757Z" level=info msg="connecting to shim bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40" address="unix:///run/containerd/s/f1eedab19d619c83489af80c0dec6d73edcc0256c2f270d15651bd5f892b67d7" protocol=ttrpc version=3 Jan 20 01:39:10.098496 systemd[1]: Started cri-containerd-bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40.scope - libcontainer container bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40. Jan 20 01:39:12.345396 systemd[1]: cri-containerd-bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40.scope: Deactivated successfully. Jan 20 01:39:12.883864 containerd[1601]: time="2026-01-20T01:39:12.879312225Z" level=info msg="received container exit event container_id:\"bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40\" id:\"bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40\" pid:3402 exited_at:{seconds:1768873152 nanos:846725733}" Jan 20 01:39:13.324177 containerd[1601]: time="2026-01-20T01:39:13.318305055Z" level=info msg="StartContainer for \"bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40\" returns successfully" Jan 20 01:39:13.874620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40-rootfs.mount: Deactivated successfully. Jan 20 01:39:14.377606 containerd[1601]: time="2026-01-20T01:39:14.376750370Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 20 01:39:32.892610 kubelet[3013]: E0120 01:39:32.890869 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.311s" Jan 20 01:39:32.937180 kubelet[3013]: E0120 01:39:32.936817 3013 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Jan 20 01:39:35.248616 kubelet[3013]: E0120 01:39:35.241425 3013 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:39:40.285301 kubelet[3013]: E0120 01:39:40.284835 3013 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:39:43.889264 containerd[1601]: time="2026-01-20T01:39:43.886892490Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:43.897097 containerd[1601]: time="2026-01-20T01:39:43.896058015Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Jan 20 01:39:43.909839 containerd[1601]: time="2026-01-20T01:39:43.908846434Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:43.955317 containerd[1601]: time="2026-01-20T01:39:43.951988099Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:39:43.996262 containerd[1601]: time="2026-01-20T01:39:43.994127228Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 29.617272973s" Jan 20 01:39:43.996262 containerd[1601]: time="2026-01-20T01:39:43.994186708Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 20 01:39:44.073039 containerd[1601]: time="2026-01-20T01:39:44.072977602Z" level=info msg="CreateContainer within sandbox \"c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 01:39:44.518856 containerd[1601]: time="2026-01-20T01:39:44.517792570Z" level=info msg="Container a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:39:44.651290 containerd[1601]: time="2026-01-20T01:39:44.646285210Z" level=info msg="CreateContainer within sandbox \"c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626\"" Jan 20 01:39:44.676141 containerd[1601]: time="2026-01-20T01:39:44.673903637Z" level=info msg="StartContainer for \"a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626\"" Jan 20 01:39:44.676141 containerd[1601]: time="2026-01-20T01:39:44.675501339Z" level=info msg="connecting to shim a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626" address="unix:///run/containerd/s/f1eedab19d619c83489af80c0dec6d73edcc0256c2f270d15651bd5f892b67d7" protocol=ttrpc version=3 Jan 20 01:39:45.151018 systemd[1]: Started cri-containerd-a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626.scope - libcontainer container a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626. Jan 20 01:39:45.317900 kubelet[3013]: E0120 01:39:45.311191 3013 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:39:45.850184 containerd[1601]: time="2026-01-20T01:39:45.819561494Z" level=info msg="StartContainer for \"a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626\" returns successfully" Jan 20 01:39:45.844365 systemd[1]: cri-containerd-a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626.scope: Deactivated successfully. Jan 20 01:39:45.853263 containerd[1601]: time="2026-01-20T01:39:45.853106619Z" level=info msg="received container exit event container_id:\"a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626\" id:\"a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626\" pid:3546 exited_at:{seconds:1768873185 nanos:852717388}" Jan 20 01:39:46.408942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626-rootfs.mount: Deactivated successfully. 
Jan 20 01:39:47.348800 containerd[1601]: time="2026-01-20T01:39:47.344146552Z" level=info msg="CreateContainer within sandbox \"c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 20 01:39:47.690340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658984346.mount: Deactivated successfully. Jan 20 01:39:47.818013 containerd[1601]: time="2026-01-20T01:39:47.812071027Z" level=info msg="Container e69d76da5ef623e29667e5019870a88d9ee13f38977769415d4ee5a80ce4b0e6: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:39:47.981150 containerd[1601]: time="2026-01-20T01:39:47.979541508Z" level=info msg="CreateContainer within sandbox \"c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e69d76da5ef623e29667e5019870a88d9ee13f38977769415d4ee5a80ce4b0e6\"" Jan 20 01:39:48.016311 containerd[1601]: time="2026-01-20T01:39:48.013454805Z" level=info msg="StartContainer for \"e69d76da5ef623e29667e5019870a88d9ee13f38977769415d4ee5a80ce4b0e6\"" Jan 20 01:39:48.053142 containerd[1601]: time="2026-01-20T01:39:48.041439408Z" level=info msg="connecting to shim e69d76da5ef623e29667e5019870a88d9ee13f38977769415d4ee5a80ce4b0e6" address="unix:///run/containerd/s/f1eedab19d619c83489af80c0dec6d73edcc0256c2f270d15651bd5f892b67d7" protocol=ttrpc version=3 Jan 20 01:39:48.631511 systemd[1]: Started cri-containerd-e69d76da5ef623e29667e5019870a88d9ee13f38977769415d4ee5a80ce4b0e6.scope - libcontainer container e69d76da5ef623e29667e5019870a88d9ee13f38977769415d4ee5a80ce4b0e6. Jan 20 01:39:49.670381 containerd[1601]: time="2026-01-20T01:39:49.636566457Z" level=info msg="StartContainer for \"e69d76da5ef623e29667e5019870a88d9ee13f38977769415d4ee5a80ce4b0e6\" returns successfully" Jan 20 01:39:51.052417 kubelet[3013]: I0120 01:39:51.051517 3013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-rx9w4" podStartSLOduration=19.317182597 podStartE2EDuration="1m9.051434755s" podCreationTimestamp="2026-01-20 01:38:42 +0000 UTC" firstStartedPulling="2026-01-20 01:38:54.282578402 +0000 UTC m=+86.100092584" lastFinishedPulling="2026-01-20 01:39:44.016830561 +0000 UTC m=+135.834344742" observedRunningTime="2026-01-20 01:39:51.042311463 +0000 UTC m=+142.859825665" watchObservedRunningTime="2026-01-20 01:39:51.051434755 +0000 UTC m=+142.868948937" Jan 20 01:39:53.469054 systemd-networkd[1482]: flannel.1: Link UP Jan 20 01:39:53.469098 systemd-networkd[1482]: flannel.1: Gained carrier Jan 20 01:39:54.413484 kubelet[3013]: I0120 01:39:54.413441 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97eacdd8-bf55-43af-99d8-574b4235b986-config-volume\") pod \"coredns-66bc5c9577-zq6b8\" (UID: \"97eacdd8-bf55-43af-99d8-574b4235b986\") " pod="kube-system/coredns-66bc5c9577-zq6b8" Jan 20 01:39:54.417652 kubelet[3013]: I0120 01:39:54.414683 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsn4j\" (UniqueName: \"kubernetes.io/projected/97eacdd8-bf55-43af-99d8-574b4235b986-kube-api-access-xsn4j\") pod \"coredns-66bc5c9577-zq6b8\" (UID: \"97eacdd8-bf55-43af-99d8-574b4235b986\") " pod="kube-system/coredns-66bc5c9577-zq6b8" Jan 20 01:39:54.417652 kubelet[3013]: I0120 01:39:54.414728 3013 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqn2\" (UniqueName: \"kubernetes.io/projected/a41aa50d-a93c-4356-8114-afce95dab8de-kube-api-access-5cqn2\") pod \"coredns-66bc5c9577-wk85m\" (UID: \"a41aa50d-a93c-4356-8114-afce95dab8de\") " pod="kube-system/coredns-66bc5c9577-wk85m" Jan 20 01:39:54.417652 kubelet[3013]: I0120 01:39:54.414834 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a41aa50d-a93c-4356-8114-afce95dab8de-config-volume\") pod \"coredns-66bc5c9577-wk85m\" (UID: \"a41aa50d-a93c-4356-8114-afce95dab8de\") " pod="kube-system/coredns-66bc5c9577-wk85m" Jan 20 01:39:54.422070 systemd[1]: Created slice kubepods-burstable-poda41aa50d_a93c_4356_8114_afce95dab8de.slice - libcontainer container kubepods-burstable-poda41aa50d_a93c_4356_8114_afce95dab8de.slice. Jan 20 01:39:54.430463 systemd[1]: Created slice kubepods-burstable-pod97eacdd8_bf55_43af_99d8_574b4235b986.slice - libcontainer container kubepods-burstable-pod97eacdd8_bf55_43af_99d8_574b4235b986.slice. Jan 20 01:39:54.562733 systemd-networkd[1482]: flannel.1: Gained IPv6LL Jan 20 01:39:54.831267 containerd[1601]: time="2026-01-20T01:39:54.830673213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wk85m,Uid:a41aa50d-a93c-4356-8114-afce95dab8de,Namespace:kube-system,Attempt:0,}" Jan 20 01:39:54.831267 containerd[1601]: time="2026-01-20T01:39:54.831334214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zq6b8,Uid:97eacdd8-bf55-43af-99d8-574b4235b986,Namespace:kube-system,Attempt:0,}" Jan 20 01:39:55.241637 systemd-networkd[1482]: cni0: Link UP Jan 20 01:39:55.241644 systemd-networkd[1482]: cni0: Gained carrier Jan 20 01:39:55.474416 systemd-networkd[1482]: cni0: Lost carrier Jan 20 01:39:56.287785 systemd-networkd[1482]: veth8b041112: Link UP Jan 20 01:39:56.387042 kernel: cni0: port 1(veth8b041112) entered blocking state Jan 20 01:39:56.387177 kernel: cni0: port 1(veth8b041112) entered disabled state Jan 20 01:39:56.413532 kernel: veth8b041112: entered allmulticast mode Jan 20 01:39:56.413658 kernel: veth8b041112: entered promiscuous mode Jan 20 01:39:56.493709 systemd-networkd[1482]: vethb702ed09: Link UP Jan 20 01:39:56.533005 kernel: cni0: port 2(vethb702ed09) entered blocking state Jan 20 01:39:56.533418 kernel: cni0: port 2(vethb702ed09) entered disabled state Jan 20 01:39:56.533638 kernel: vethb702ed09: entered allmulticast mode Jan 20 01:39:56.574311 kernel: vethb702ed09: entered promiscuous mode Jan 20 01:39:56.649704 kernel: cni0: port 1(veth8b041112) entered blocking state Jan 20 01:39:56.649925 kernel: cni0: port 1(veth8b041112) entered forwarding state Jan 20 01:39:56.651155 systemd-networkd[1482]: veth8b041112: Gained carrier Jan 20 01:39:56.651671 systemd-networkd[1482]: cni0: Gained carrier Jan 20 01:39:56.769494 kernel: cni0: port 2(vethb702ed09) entered blocking state Jan 20 01:39:56.769658 kernel: cni0: port 2(vethb702ed09) entered forwarding state Jan 20 01:39:56.835798 systemd-networkd[1482]: vethb702ed09: Gained carrier Jan 20 01:39:56.962369 containerd[1601]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 
0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0001007d0), "name":"cbr0", "type":"bridge"} Jan 20 01:39:56.962369 containerd[1601]: delegateAdd: netconf sent to delegate plugin: Jan 20 01:39:56.972525 containerd[1601]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Jan 20 01:39:56.972525 containerd[1601]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000102930), "name":"cbr0", "type":"bridge"} Jan 20 01:39:56.972525 containerd[1601]: delegateAdd: netconf sent to delegate plugin: Jan 20 01:39:57.000478 systemd-networkd[1482]: cni0: Gained IPv6LL Jan 20 01:39:57.799851 containerd[1601]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-20T01:39:57.788025949Z" level=info msg="connecting to shim 4682ed9268b42b07c63b128ed15499ee2ab89c05bf6d6bc9fc35ffe7e2d38ba3" address="unix:///run/containerd/s/ac1b379b4bbe97da5ddbba754c52638635dd2f8594d799d911af5106848bf1bd" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:39:57.825249 containerd[1601]: time="2026-01-20T01:39:57.825047497Z" level=info msg="connecting to shim b19b528766a88f63e8907a18ca7fbe7967e935d559e9fe538840698044ffbab4" address="unix:///run/containerd/s/4ae549943c881aa74c0a354f4e9c64257c5a95b66cfa274ad6c0e5475c3a6a92" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:39:58.446098 systemd-networkd[1482]: vethb702ed09: Gained IPv6LL Jan 20 01:39:58.446630 systemd[1]: Started cri-containerd-4682ed9268b42b07c63b128ed15499ee2ab89c05bf6d6bc9fc35ffe7e2d38ba3.scope - libcontainer container 4682ed9268b42b07c63b128ed15499ee2ab89c05bf6d6bc9fc35ffe7e2d38ba3. Jan 20 01:39:58.607430 systemd[1]: Started cri-containerd-b19b528766a88f63e8907a18ca7fbe7967e935d559e9fe538840698044ffbab4.scope - libcontainer container b19b528766a88f63e8907a18ca7fbe7967e935d559e9fe538840698044ffbab4. 
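
[Editor's note: flannel logs the netconf it delegates to the bridge plugin twice above, once as a Go map dump and once as JSON. The sketch below unmarshals the JSON copied verbatim from the log; the struct mirrors only the keys present there and is not the CNI library's own type. Note the MTU of 1450, which leaves headroom for VXLAN encapsulation on the flannel.1 link.]

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type netconf struct {
	Name             string `json:"name"`
	Type             string `json:"type"`
	MTU              int    `json:"mtu"`
	IsDefaultGateway bool   `json:"isDefaultGateway"`
	IPAM             struct {
		Type   string                `json:"type"`
		Ranges [][]map[string]string `json:"ranges"`
		Routes []map[string]string   `json:"routes"`
	} `json:"ipam"`
}

func main() {
	// Copied verbatim from the delegateAdd entry above.
	raw := `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`
	var c netconf
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("network=%s type=%s mtu=%d subnet=%s route=%s\n",
		c.Name, c.Type, c.MTU, c.IPAM.Ranges[0][0]["subnet"], c.IPAM.Routes[0]["dst"])
}
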
Jan 20 01:39:58.659605 systemd-networkd[1482]: veth8b041112: Gained IPv6LL Jan 20 01:40:00.110459 systemd-resolved[1396]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:40:00.173721 systemd-resolved[1396]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:40:00.305043 containerd[1601]: time="2026-01-20T01:40:00.300145508Z" level=error msg="get state for 4682ed9268b42b07c63b128ed15499ee2ab89c05bf6d6bc9fc35ffe7e2d38ba3" error="context deadline exceeded" Jan 20 01:40:00.305043 containerd[1601]: time="2026-01-20T01:40:00.300542555Z" level=warning msg="unknown status" status=0 Jan 20 01:40:00.386545 containerd[1601]: time="2026-01-20T01:40:00.386338338Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:40:00.668768 containerd[1601]: time="2026-01-20T01:40:00.668475614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wk85m,Uid:a41aa50d-a93c-4356-8114-afce95dab8de,Namespace:kube-system,Attempt:0,} returns sandbox id \"4682ed9268b42b07c63b128ed15499ee2ab89c05bf6d6bc9fc35ffe7e2d38ba3\"" Jan 20 01:40:00.841649 containerd[1601]: time="2026-01-20T01:40:00.835757980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zq6b8,Uid:97eacdd8-bf55-43af-99d8-574b4235b986,Namespace:kube-system,Attempt:0,} returns sandbox id \"b19b528766a88f63e8907a18ca7fbe7967e935d559e9fe538840698044ffbab4\"" Jan 20 01:40:01.032347 containerd[1601]: time="2026-01-20T01:40:01.032086394Z" level=info msg="CreateContainer within sandbox \"4682ed9268b42b07c63b128ed15499ee2ab89c05bf6d6bc9fc35ffe7e2d38ba3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:40:01.178673 containerd[1601]: time="2026-01-20T01:40:01.175308122Z" level=info msg="CreateContainer within sandbox \"b19b528766a88f63e8907a18ca7fbe7967e935d559e9fe538840698044ffbab4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:40:01.551550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2759164948.mount: Deactivated successfully. 
Jan 20 01:40:01.684284 containerd[1601]: time="2026-01-20T01:40:01.614968891Z" level=info msg="Container 437524d2597bc3fe23fb789634e93ca424eecbcf5081c21a9ba391b7abf6ccd9: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:40:01.724348 containerd[1601]: time="2026-01-20T01:40:01.723321730Z" level=info msg="Container 5778948b29b956c82229c5a9571a877bc546bcd28e6ca7ffcdd230c4ff7a0fb3: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:40:01.837932 containerd[1601]: time="2026-01-20T01:40:01.833938219Z" level=info msg="CreateContainer within sandbox \"4682ed9268b42b07c63b128ed15499ee2ab89c05bf6d6bc9fc35ffe7e2d38ba3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"437524d2597bc3fe23fb789634e93ca424eecbcf5081c21a9ba391b7abf6ccd9\"" Jan 20 01:40:01.875383 containerd[1601]: time="2026-01-20T01:40:01.851028546Z" level=info msg="StartContainer for \"437524d2597bc3fe23fb789634e93ca424eecbcf5081c21a9ba391b7abf6ccd9\"" Jan 20 01:40:01.894411 containerd[1601]: time="2026-01-20T01:40:01.894331841Z" level=info msg="CreateContainer within sandbox \"b19b528766a88f63e8907a18ca7fbe7967e935d559e9fe538840698044ffbab4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5778948b29b956c82229c5a9571a877bc546bcd28e6ca7ffcdd230c4ff7a0fb3\"" Jan 20 01:40:01.895490 containerd[1601]: time="2026-01-20T01:40:01.895451087Z" level=info msg="StartContainer for \"5778948b29b956c82229c5a9571a877bc546bcd28e6ca7ffcdd230c4ff7a0fb3\"" Jan 20 01:40:01.906649 containerd[1601]: time="2026-01-20T01:40:01.906586777Z" level=info msg="connecting to shim 437524d2597bc3fe23fb789634e93ca424eecbcf5081c21a9ba391b7abf6ccd9" address="unix:///run/containerd/s/ac1b379b4bbe97da5ddbba754c52638635dd2f8594d799d911af5106848bf1bd" protocol=ttrpc version=3 Jan 20 01:40:01.921856 containerd[1601]: time="2026-01-20T01:40:01.921720972Z" level=info msg="connecting to shim 5778948b29b956c82229c5a9571a877bc546bcd28e6ca7ffcdd230c4ff7a0fb3" address="unix:///run/containerd/s/4ae549943c881aa74c0a354f4e9c64257c5a95b66cfa274ad6c0e5475c3a6a92" protocol=ttrpc version=3 Jan 20 01:40:02.318622 systemd[1]: Started cri-containerd-5778948b29b956c82229c5a9571a877bc546bcd28e6ca7ffcdd230c4ff7a0fb3.scope - libcontainer container 5778948b29b956c82229c5a9571a877bc546bcd28e6ca7ffcdd230c4ff7a0fb3. Jan 20 01:40:02.690491 systemd[1]: Started cri-containerd-437524d2597bc3fe23fb789634e93ca424eecbcf5081c21a9ba391b7abf6ccd9.scope - libcontainer container 437524d2597bc3fe23fb789634e93ca424eecbcf5081c21a9ba391b7abf6ccd9. 
Jan 20 01:40:03.730031 containerd[1601]: time="2026-01-20T01:40:03.729826611Z" level=info msg="StartContainer for \"5778948b29b956c82229c5a9571a877bc546bcd28e6ca7ffcdd230c4ff7a0fb3\" returns successfully" Jan 20 01:40:03.928612 containerd[1601]: time="2026-01-20T01:40:03.917407909Z" level=info msg="StartContainer for \"437524d2597bc3fe23fb789634e93ca424eecbcf5081c21a9ba391b7abf6ccd9\" returns successfully" Jan 20 01:40:04.439284 kubelet[3013]: I0120 01:40:04.436580 3013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zq6b8" podStartSLOduration=84.436463594 podStartE2EDuration="1m24.436463594s" podCreationTimestamp="2026-01-20 01:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:40:04.401511073 +0000 UTC m=+156.219025275" watchObservedRunningTime="2026-01-20 01:40:04.436463594 +0000 UTC m=+156.253977786" Jan 20 01:40:05.510393 kubelet[3013]: I0120 01:40:05.502856 3013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wk85m" podStartSLOduration=84.502830062 podStartE2EDuration="1m24.502830062s" podCreationTimestamp="2026-01-20 01:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:40:04.915357967 +0000 UTC m=+156.732872149" watchObservedRunningTime="2026-01-20 01:40:05.502830062 +0000 UTC m=+157.320344263" Jan 20 01:40:19.134971 containerd[1601]: time="2026-01-20T01:40:19.133988269Z" level=warning msg="container event discarded" container=41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7 type=CONTAINER_CREATED_EVENT Jan 20 01:40:19.189707 containerd[1601]: time="2026-01-20T01:40:19.187096605Z" level=warning msg="container event discarded" container=41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7 type=CONTAINER_STARTED_EVENT Jan 20 01:40:19.308489 containerd[1601]: time="2026-01-20T01:40:19.308406196Z" level=warning msg="container event discarded" container=373f3034efc7fb59ffd2e811cfc795250b74df27d657adf5b691027c6e2517b4 type=CONTAINER_CREATED_EVENT Jan 20 01:40:19.308910 containerd[1601]: time="2026-01-20T01:40:19.308865282Z" level=warning msg="container event discarded" container=373f3034efc7fb59ffd2e811cfc795250b74df27d657adf5b691027c6e2517b4 type=CONTAINER_STARTED_EVENT Jan 20 01:40:19.997550 containerd[1601]: time="2026-01-20T01:40:19.997363784Z" level=warning msg="container event discarded" container=d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387 type=CONTAINER_CREATED_EVENT Jan 20 01:40:19.997550 containerd[1601]: time="2026-01-20T01:40:19.997471106Z" level=warning msg="container event discarded" container=d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387 type=CONTAINER_STARTED_EVENT Jan 20 01:40:21.599499 containerd[1601]: time="2026-01-20T01:40:21.554472752Z" level=warning msg="container event discarded" container=b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240 type=CONTAINER_CREATED_EVENT Jan 20 01:40:22.298038 containerd[1601]: time="2026-01-20T01:40:22.295309885Z" level=warning msg="container event discarded" container=1a5ec8c2fc44209b5d08955dac3c6fcf615f419f4bff92813ef63bf4d4d8ea50 type=CONTAINER_CREATED_EVENT Jan 20 01:40:22.607888 containerd[1601]: time="2026-01-20T01:40:22.607337782Z" level=warning msg="container event discarded" 
container=0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d type=CONTAINER_CREATED_EVENT Jan 20 01:40:27.349564 containerd[1601]: time="2026-01-20T01:40:27.349448449Z" level=warning msg="container event discarded" container=b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240 type=CONTAINER_STARTED_EVENT Jan 20 01:40:29.657072 containerd[1601]: time="2026-01-20T01:40:29.647515347Z" level=warning msg="container event discarded" container=0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d type=CONTAINER_STARTED_EVENT Jan 20 01:40:29.975308 containerd[1601]: time="2026-01-20T01:40:29.975024738Z" level=warning msg="container event discarded" container=1a5ec8c2fc44209b5d08955dac3c6fcf615f419f4bff92813ef63bf4d4d8ea50 type=CONTAINER_STARTED_EVENT Jan 20 01:41:24.642090 kubelet[3013]: E0120 01:41:24.639944 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.088s" Jan 20 01:41:28.135275 kubelet[3013]: E0120 01:41:28.015609 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.461s" Jan 20 01:41:31.788348 kubelet[3013]: E0120 01:41:31.761441 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.549s" Jan 20 01:41:34.755395 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Jan 20 01:41:36.419439 kubelet[3013]: E0120 01:41:36.412097 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.793s" Jan 20 01:41:39.169139 kubelet[3013]: E0120 01:41:39.168977 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.757s" Jan 20 01:41:39.311547 systemd-tmpfiles[4250]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 01:41:39.311584 systemd-tmpfiles[4250]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 01:41:39.312112 systemd-tmpfiles[4250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:41:39.360608 systemd-tmpfiles[4250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:41:39.390876 systemd-tmpfiles[4250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:41:39.408390 systemd-tmpfiles[4250]: ACLs are not supported, ignoring. Jan 20 01:41:39.408571 systemd-tmpfiles[4250]: ACLs are not supported, ignoring. Jan 20 01:41:39.479064 systemd-tmpfiles[4250]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:41:39.479088 systemd-tmpfiles[4250]: Skipping /boot Jan 20 01:41:39.685852 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 20 01:41:39.686541 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 20 01:41:39.790154 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. 
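
[Editor's note: the pod_startup_latency_tracker entries earlier encode a simple relation: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. The sketch below checks that for coredns-66bc5c9577-zq6b8 using both timestamps copied from its log line; they are Go time.Time String() output, hence the layout used to parse them.]

package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-20 01:38:40 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	observed, err := time.Parse(layout, "2026-01-20 01:40:04.436463594 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 84.436463594, matching podStartSLOduration in the log.
	fmt.Println(observed.Sub(created).Seconds())
}
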
Jan 20 01:41:48.326538 kubelet[3013]: E0120 01:41:48.322997 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.795s" Jan 20 01:42:00.300545 kubelet[3013]: E0120 01:42:00.294085 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.644s" Jan 20 01:42:05.060507 kubelet[3013]: E0120 01:42:05.036375 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.48s" Jan 20 01:42:19.521535 kubelet[3013]: E0120 01:42:19.519403 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.794s" Jan 20 01:42:20.745685 kubelet[3013]: E0120 01:42:20.733841 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.186s" Jan 20 01:42:52.023857 containerd[1601]: time="2026-01-20T01:42:52.008594310Z" level=warning msg="container event discarded" container=0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d type=CONTAINER_STOPPED_EVENT Jan 20 01:42:52.063481 containerd[1601]: time="2026-01-20T01:42:52.062813589Z" level=warning msg="container event discarded" container=b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240 type=CONTAINER_STOPPED_EVENT Jan 20 01:42:53.169154 containerd[1601]: time="2026-01-20T01:42:53.169041620Z" level=warning msg="container event discarded" container=149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170 type=CONTAINER_CREATED_EVENT Jan 20 01:42:53.449970 containerd[1601]: time="2026-01-20T01:42:53.432123958Z" level=warning msg="container event discarded" container=61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada type=CONTAINER_CREATED_EVENT Jan 20 01:42:55.384029 containerd[1601]: time="2026-01-20T01:42:55.379125526Z" level=warning msg="container event discarded" container=149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170 type=CONTAINER_STARTED_EVENT Jan 20 01:42:55.777023 containerd[1601]: time="2026-01-20T01:42:55.775087949Z" level=warning msg="container event discarded" container=61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada type=CONTAINER_STARTED_EVENT Jan 20 01:43:54.097957 containerd[1601]: time="2026-01-20T01:43:54.096034732Z" level=warning msg="container event discarded" container=993a6bf6257f98cc9109fdfd690f3cbff8a6b8176034ab3a64576d3fbf496c27 type=CONTAINER_CREATED_EVENT Jan 20 01:43:54.097957 containerd[1601]: time="2026-01-20T01:43:54.096170401Z" level=warning msg="container event discarded" container=993a6bf6257f98cc9109fdfd690f3cbff8a6b8176034ab3a64576d3fbf496c27 type=CONTAINER_STARTED_EVENT Jan 20 01:43:54.097957 containerd[1601]: time="2026-01-20T01:43:54.096190259Z" level=warning msg="container event discarded" container=c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4 type=CONTAINER_CREATED_EVENT Jan 20 01:43:54.097957 containerd[1601]: time="2026-01-20T01:43:54.096302202Z" level=warning msg="container event discarded" container=c041f4658330e399c727c436300ee44e8cb0e0c38416369c68b685f6d7150bd4 type=CONTAINER_STARTED_EVENT Jan 20 01:43:55.305358 containerd[1601]: time="2026-01-20T01:43:55.304750190Z" level=warning msg="container event discarded" container=c3763aa918221c55458d870e5b009d608fd74089c9c4aa8d966324d6d35a96ce type=CONTAINER_CREATED_EVENT Jan 20 01:43:56.853941 containerd[1601]: time="2026-01-20T01:43:56.853847252Z" level=warning 
msg="container event discarded" container=c3763aa918221c55458d870e5b009d608fd74089c9c4aa8d966324d6d35a96ce type=CONTAINER_STARTED_EVENT Jan 20 01:44:08.167009 containerd[1601]: time="2026-01-20T01:44:08.166396511Z" level=warning msg="container event discarded" container=bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40 type=CONTAINER_CREATED_EVENT Jan 20 01:44:12.836272 containerd[1601]: time="2026-01-20T01:44:12.835668599Z" level=warning msg="container event discarded" container=bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40 type=CONTAINER_STARTED_EVENT Jan 20 01:44:14.035028 containerd[1601]: time="2026-01-20T01:44:14.034636451Z" level=warning msg="container event discarded" container=bedc08108565450af9dec08e7550441108041c15980b97137a0ec1ba86d03c40 type=CONTAINER_STOPPED_EVENT Jan 20 01:44:44.653254 containerd[1601]: time="2026-01-20T01:44:44.652400034Z" level=warning msg="container event discarded" container=a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626 type=CONTAINER_CREATED_EVENT Jan 20 01:44:45.825112 containerd[1601]: time="2026-01-20T01:44:45.821671318Z" level=warning msg="container event discarded" container=a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626 type=CONTAINER_STARTED_EVENT Jan 20 01:44:46.681994 containerd[1601]: time="2026-01-20T01:44:46.681773345Z" level=warning msg="container event discarded" container=a9816ebdc76947fc0db3a052a5c5477a2b58a7d500f2934435711f5919066626 type=CONTAINER_STOPPED_EVENT Jan 20 01:44:47.963616 containerd[1601]: time="2026-01-20T01:44:47.963310838Z" level=warning msg="container event discarded" container=e69d76da5ef623e29667e5019870a88d9ee13f38977769415d4ee5a80ce4b0e6 type=CONTAINER_CREATED_EVENT Jan 20 01:44:49.436350 containerd[1601]: time="2026-01-20T01:44:49.433596646Z" level=warning msg="container event discarded" container=e69d76da5ef623e29667e5019870a88d9ee13f38977769415d4ee5a80ce4b0e6 type=CONTAINER_STARTED_EVENT Jan 20 01:44:55.138188 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:50184.service - OpenSSH per-connection server daemon (10.0.0.1:50184). Jan 20 01:44:55.649592 sshd[4938]: Accepted publickey for core from 10.0.0.1 port 50184 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:44:55.656439 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:55.735148 systemd-logind[1586]: New session 8 of user core. Jan 20 01:44:55.762794 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 01:44:57.317934 sshd[4941]: Connection closed by 10.0.0.1 port 50184 Jan 20 01:44:57.319501 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:57.334920 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:50184.service: Deactivated successfully. Jan 20 01:44:57.351342 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 01:44:57.362584 systemd-logind[1586]: Session 8 logged out. Waiting for processes to exit. Jan 20 01:44:57.392767 systemd-logind[1586]: Removed session 8. 
Jan 20 01:45:00.695055 containerd[1601]: time="2026-01-20T01:45:00.680836612Z" level=warning msg="container event discarded" container=4682ed9268b42b07c63b128ed15499ee2ab89c05bf6d6bc9fc35ffe7e2d38ba3 type=CONTAINER_CREATED_EVENT Jan 20 01:45:00.705073 containerd[1601]: time="2026-01-20T01:45:00.695181617Z" level=warning msg="container event discarded" container=4682ed9268b42b07c63b128ed15499ee2ab89c05bf6d6bc9fc35ffe7e2d38ba3 type=CONTAINER_STARTED_EVENT Jan 20 01:45:00.846355 containerd[1601]: time="2026-01-20T01:45:00.846173462Z" level=warning msg="container event discarded" container=b19b528766a88f63e8907a18ca7fbe7967e935d559e9fe538840698044ffbab4 type=CONTAINER_CREATED_EVENT Jan 20 01:45:00.846355 containerd[1601]: time="2026-01-20T01:45:00.846312598Z" level=warning msg="container event discarded" container=b19b528766a88f63e8907a18ca7fbe7967e935d559e9fe538840698044ffbab4 type=CONTAINER_STARTED_EVENT Jan 20 01:45:01.842835 containerd[1601]: time="2026-01-20T01:45:01.842725814Z" level=warning msg="container event discarded" container=437524d2597bc3fe23fb789634e93ca424eecbcf5081c21a9ba391b7abf6ccd9 type=CONTAINER_CREATED_EVENT Jan 20 01:45:01.883978 containerd[1601]: time="2026-01-20T01:45:01.881972143Z" level=warning msg="container event discarded" container=5778948b29b956c82229c5a9571a877bc546bcd28e6ca7ffcdd230c4ff7a0fb3 type=CONTAINER_CREATED_EVENT Jan 20 01:45:02.379545 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:50210.service - OpenSSH per-connection server daemon (10.0.0.1:50210). Jan 20 01:45:02.756990 sshd[4981]: Accepted publickey for core from 10.0.0.1 port 50210 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:45:02.747776 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:45:02.822185 systemd-logind[1586]: New session 9 of user core. Jan 20 01:45:02.845313 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 01:45:03.393961 sshd[4984]: Connection closed by 10.0.0.1 port 50210 Jan 20 01:45:03.392555 sshd-session[4981]: pam_unix(sshd:session): session closed for user core Jan 20 01:45:03.438747 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:50210.service: Deactivated successfully. Jan 20 01:45:03.443443 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 01:45:03.468914 systemd-logind[1586]: Session 9 logged out. Waiting for processes to exit. Jan 20 01:45:03.491625 systemd-logind[1586]: Removed session 9. Jan 20 01:45:03.743342 containerd[1601]: time="2026-01-20T01:45:03.739035124Z" level=warning msg="container event discarded" container=5778948b29b956c82229c5a9571a877bc546bcd28e6ca7ffcdd230c4ff7a0fb3 type=CONTAINER_STARTED_EVENT Jan 20 01:45:03.859106 containerd[1601]: time="2026-01-20T01:45:03.858443153Z" level=warning msg="container event discarded" container=437524d2597bc3fe23fb789634e93ca424eecbcf5081c21a9ba391b7abf6ccd9 type=CONTAINER_STARTED_EVENT Jan 20 01:45:08.517902 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:53504.service - OpenSSH per-connection server daemon (10.0.0.1:53504). Jan 20 01:45:09.125361 sshd[5033]: Accepted publickey for core from 10.0.0.1 port 53504 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:45:09.136900 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:45:09.192108 systemd-logind[1586]: New session 10 of user core. Jan 20 01:45:09.201730 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 20 01:45:10.332793 sshd[5042]: Connection closed by 10.0.0.1 port 53504 Jan 20 01:45:10.339681 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Jan 20 01:45:10.421073 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:53504.service: Deactivated successfully. Jan 20 01:45:10.442976 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 01:45:10.466095 systemd-logind[1586]: Session 10 logged out. Waiting for processes to exit. Jan 20 01:45:10.536460 systemd-logind[1586]: Removed session 10. Jan 20 01:45:15.436184 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:34780.service - OpenSSH per-connection server daemon (10.0.0.1:34780). Jan 20 01:45:16.068431 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 34780 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:45:16.099006 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:45:16.204183 systemd-logind[1586]: New session 11 of user core. Jan 20 01:45:16.289553 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 01:45:18.341185 sshd[5081]: Connection closed by 10.0.0.1 port 34780 Jan 20 01:45:18.347501 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Jan 20 01:45:18.424583 systemd-logind[1586]: Session 11 logged out. Waiting for processes to exit. Jan 20 01:45:18.430097 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:34780.service: Deactivated successfully. Jan 20 01:45:18.478800 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 01:45:18.530773 systemd-logind[1586]: Removed session 11. Jan 20 01:45:23.454813 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:34792.service - OpenSSH per-connection server daemon (10.0.0.1:34792). Jan 20 01:45:24.460165 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 34792 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:45:24.471882 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:45:24.681504 systemd-logind[1586]: New session 12 of user core. Jan 20 01:45:24.763491 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 01:45:27.898300 sshd[5137]: Connection closed by 10.0.0.1 port 34792 Jan 20 01:45:27.846569 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Jan 20 01:45:28.006513 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:34792.service: Deactivated successfully. Jan 20 01:45:28.078629 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 01:45:28.173362 systemd-logind[1586]: Session 12 logged out. Waiting for processes to exit. Jan 20 01:45:28.239171 systemd-logind[1586]: Removed session 12. Jan 20 01:45:32.929443 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:58576.service - OpenSSH per-connection server daemon (10.0.0.1:58576). Jan 20 01:45:33.711272 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 58576 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:45:33.712552 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:45:33.820001 systemd-logind[1586]: New session 13 of user core. Jan 20 01:45:33.844698 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 01:45:35.168388 sshd[5182]: Connection closed by 10.0.0.1 port 58576 Jan 20 01:45:35.194604 sshd-session[5179]: pam_unix(sshd:session): session closed for user core Jan 20 01:45:35.318825 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:58576.service: Deactivated successfully. Jan 20 01:45:35.345332 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 01:45:35.363003 systemd-logind[1586]: Session 13 logged out. Waiting for processes to exit. Jan 20 01:45:35.412158 systemd-logind[1586]: Removed session 13. Jan 20 01:45:40.370439 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:40982.service - OpenSSH per-connection server daemon (10.0.0.1:40982). Jan 20 01:45:40.861836 sshd[5224]: Accepted publickey for core from 10.0.0.1 port 40982 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:45:40.870289 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:45:40.939415 systemd-logind[1586]: New session 14 of user core. Jan 20 01:45:40.971073 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 01:45:42.428293 sshd[5231]: Connection closed by 10.0.0.1 port 40982 Jan 20 01:45:42.451456 sshd-session[5224]: pam_unix(sshd:session): session closed for user core Jan 20 01:45:42.509950 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:40982.service: Deactivated successfully. Jan 20 01:45:42.554533 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 01:45:42.585089 systemd-logind[1586]: Session 14 logged out. Waiting for processes to exit. Jan 20 01:45:42.607660 systemd-logind[1586]: Removed session 14. Jan 20 01:45:47.511783 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:51444.service - OpenSSH per-connection server daemon (10.0.0.1:51444). Jan 20 01:45:48.056669 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 51444 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:45:48.058633 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:45:48.129108 systemd-logind[1586]: New session 15 of user core. Jan 20 01:45:48.168646 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 01:45:49.134742 sshd[5284]: Connection closed by 10.0.0.1 port 51444 Jan 20 01:45:49.149755 sshd-session[5281]: pam_unix(sshd:session): session closed for user core Jan 20 01:45:49.192437 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:51444.service: Deactivated successfully. Jan 20 01:45:49.238758 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 01:45:49.263646 systemd-logind[1586]: Session 15 logged out. Waiting for processes to exit. Jan 20 01:45:49.300298 systemd-logind[1586]: Removed session 15. Jan 20 01:45:53.613288 kubelet[3013]: E0120 01:45:53.596471 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:45:54.344557 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:51460.service - OpenSSH per-connection server daemon (10.0.0.1:51460). Jan 20 01:45:55.147355 sshd[5318]: Accepted publickey for core from 10.0.0.1 port 51460 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:45:55.146324 sshd-session[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:45:55.223086 systemd-logind[1586]: New session 16 of user core. Jan 20 01:45:55.249360 systemd[1]: Started session-16.scope - Session 16 of User core. 
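The recurring dns.go "Nameserver limits exceeded" error above is the kubelet noting that the node's resolv.conf lists more nameservers than the libc resolver will use (the classic glibc MAXNS limit of 3), so only the first three are applied; the "applied nameserver line" in the record shows the survivors. A sketch of the truncation rule as I understand it; this is a re-implementation for illustration, not kubelet's own code:

    package main

    import "fmt"

    // maxNameservers mirrors the glibc resolver limit (MAXNS = 3) that the
    // kubelet enforces when assembling resolv.conf contents for pods.
    const maxNameservers = 3

    // applyNameserverLimit keeps the first maxNameservers entries and returns
    // what was dropped, which is what the "Nameserver limits exceeded"
    // warning is reporting.
    func applyNameserverLimit(ns []string) (applied, omitted []string) {
    	if len(ns) <= maxNameservers {
    		return ns, nil
    	}
    	return ns[:maxNameservers], ns[maxNameservers:]
    }

    func main() {
    	// Hypothetical node resolv.conf with one nameserver too many.
    	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
    	applied, omitted := applyNameserverLimit(ns)
    	fmt.Println("applied:", applied) // [1.1.1.1 1.0.0.1 8.8.8.8], as in the log
    	fmt.Println("omitted:", omitted)
    }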
Jan 20 01:45:56.350082 sshd[5325]: Connection closed by 10.0.0.1 port 51460 Jan 20 01:45:56.350668 sshd-session[5318]: pam_unix(sshd:session): session closed for user core Jan 20 01:45:56.389499 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:51460.service: Deactivated successfully. Jan 20 01:45:56.404921 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 01:45:56.414582 systemd-logind[1586]: Session 16 logged out. Waiting for processes to exit. Jan 20 01:45:56.433615 systemd-logind[1586]: Removed session 16. Jan 20 01:46:01.421602 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:41174.service - OpenSSH per-connection server daemon (10.0.0.1:41174). Jan 20 01:46:01.881358 sshd[5359]: Accepted publickey for core from 10.0.0.1 port 41174 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:46:01.891890 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:46:01.956965 systemd-logind[1586]: New session 17 of user core. Jan 20 01:46:01.973952 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 01:46:03.317807 sshd[5363]: Connection closed by 10.0.0.1 port 41174 Jan 20 01:46:03.318861 sshd-session[5359]: pam_unix(sshd:session): session closed for user core Jan 20 01:46:03.389933 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:41174.service: Deactivated successfully. Jan 20 01:46:03.424634 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 01:46:03.450628 systemd-logind[1586]: Session 17 logged out. Waiting for processes to exit. Jan 20 01:46:03.484484 systemd-logind[1586]: Removed session 17. Jan 20 01:46:07.551278 kubelet[3013]: E0120 01:46:07.550966 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:08.452364 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:57732.service - OpenSSH per-connection server daemon (10.0.0.1:57732). Jan 20 01:46:08.897486 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 57732 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:46:08.915801 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:46:08.989515 systemd-logind[1586]: New session 18 of user core. Jan 20 01:46:09.039792 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 01:46:10.694517 sshd[5401]: Connection closed by 10.0.0.1 port 57732 Jan 20 01:46:10.693638 sshd-session[5398]: pam_unix(sshd:session): session closed for user core Jan 20 01:46:10.739280 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:57732.service: Deactivated successfully. Jan 20 01:46:10.766903 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 01:46:10.801294 systemd-logind[1586]: Session 18 logged out. Waiting for processes to exit. Jan 20 01:46:10.820449 systemd-logind[1586]: Removed session 18. Jan 20 01:46:15.785590 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:48534.service - OpenSSH per-connection server daemon (10.0.0.1:48534). Jan 20 01:46:16.166627 sshd[5442]: Accepted publickey for core from 10.0.0.1 port 48534 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:46:16.210133 sshd-session[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:46:16.264242 systemd-logind[1586]: New session 19 of user core. Jan 20 01:46:16.312343 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 20 01:46:17.245948 sshd[5459]: Connection closed by 10.0.0.1 port 48534 Jan 20 01:46:17.246844 sshd-session[5442]: pam_unix(sshd:session): session closed for user core Jan 20 01:46:17.294609 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:48534.service: Deactivated successfully. Jan 20 01:46:17.309555 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 01:46:17.354363 systemd-logind[1586]: Session 19 logged out. Waiting for processes to exit. Jan 20 01:46:17.371960 systemd-logind[1586]: Removed session 19. Jan 20 01:46:18.554880 kubelet[3013]: E0120 01:46:18.546990 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:22.397973 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:48604.service - OpenSSH per-connection server daemon (10.0.0.1:48604). Jan 20 01:46:22.624026 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 48604 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:46:22.644061 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:46:22.688809 systemd-logind[1586]: New session 20 of user core. Jan 20 01:46:22.714871 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 01:46:39.743659 kubelet[3013]: E0120 01:46:39.742024 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.523s" Jan 20 01:46:39.774468 systemd[1]: cri-containerd-149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170.scope: Deactivated successfully. Jan 20 01:46:39.787497 containerd[1601]: time="2026-01-20T01:46:39.774849506Z" level=info msg="received container exit event container_id:\"149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170\" id:\"149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170\" pid:3114 exit_status:1 exited_at:{seconds:1768873599 nanos:773780668}" Jan 20 01:46:39.793464 systemd[1]: cri-containerd-149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170.scope: Consumed 27.015s CPU time, 52.7M memory peak. Jan 20 01:46:39.854692 kubelet[3013]: E0120 01:46:39.854489 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:40.249188 kubelet[3013]: E0120 01:46:40.237578 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:41.298447 sshd[5497]: Connection closed by 10.0.0.1 port 48604 Jan 20 01:46:42.406090 sshd-session[5494]: pam_unix(sshd:session): session closed for user core Jan 20 01:46:42.746728 systemd[1]: cri-containerd-61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada.scope: Deactivated successfully. Jan 20 01:46:45.810692 systemd[1]: cri-containerd-61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada.scope: Consumed 17.592s CPU time, 24.5M memory peak. Jan 20 01:46:52.074093 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:40984.service - OpenSSH per-connection server daemon (10.0.0.1:40984). Jan 20 01:46:52.103852 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:48604.service: Deactivated successfully. Jan 20 01:46:52.152049 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 20 01:46:52.242738 containerd[1601]: time="2026-01-20T01:46:52.240418806Z" level=info msg="received container exit event container_id:\"61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada\" id:\"61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada\" pid:3121 exit_status:1 exited_at:{seconds:1768873611 nanos:918068408}" Jan 20 01:46:52.302470 systemd-logind[1586]: Session 20 logged out. Waiting for processes to exit. Jan 20 01:46:52.388948 systemd-logind[1586]: Removed session 20. Jan 20 01:46:52.441415 containerd[1601]: time="2026-01-20T01:46:52.426349215Z" level=error msg="failed to handle container TaskExit event container_id:\"149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170\" id:\"149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170\" pid:3114 exit_status:1 exited_at:{seconds:1768873599 nanos:773780668}" error="failed to stop container: unknown error after kill: runc did not terminate successfully: exit status 137: " Jan 20 01:46:52.662287 kubelet[3013]: E0120 01:46:52.662154 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.705s" Jan 20 01:46:53.118661 kubelet[3013]: E0120 01:46:53.111160 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:53.186763 kubelet[3013]: E0120 01:46:53.174469 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:53.711648 containerd[1601]: time="2026-01-20T01:46:53.709956120Z" level=info msg="TaskExit event container_id:\"149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170\" id:\"149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170\" pid:3114 exit_status:1 exited_at:{seconds:1768873599 nanos:773780668}" Jan 20 01:46:53.813086 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 40984 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:46:53.879086 sshd-session[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:46:54.159813 systemd-logind[1586]: New session 21 of user core. Jan 20 01:46:54.236691 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 01:46:54.903068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada-rootfs.mount: Deactivated successfully. Jan 20 01:46:55.023103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170-rootfs.mount: Deactivated successfully. 
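The TaskExit records above carry the exit time as a protobuf timestamp (exited_at:{seconds:1768873599 nanos:773780668}). Converting it confirms the container actually died at 01:46:39.77, so the "failed to handle container TaskExit event ... exit status 137" at 01:46:52 is containerd retrying an event that was already almost 13 seconds old, consistent with the housekeeping stalls. A one-liner to do the conversion:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// exited_at from the TaskExit record for container 149e3456...
    	exitedAt := time.Unix(1768873599, 773780668).UTC()
    	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2026-01-20T01:46:39.773780668Z
    }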
Jan 20 01:46:56.166601 sshd[5570]: Connection closed by 10.0.0.1 port 40984 Jan 20 01:46:56.194521 kubelet[3013]: I0120 01:46:56.191069 3013 scope.go:117] "RemoveContainer" containerID="b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240" Jan 20 01:46:56.194521 kubelet[3013]: I0120 01:46:56.191877 3013 scope.go:117] "RemoveContainer" containerID="61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada" Jan 20 01:46:56.194521 kubelet[3013]: E0120 01:46:56.191980 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:56.194521 kubelet[3013]: E0120 01:46:56.192126 3013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571" Jan 20 01:46:56.237338 containerd[1601]: time="2026-01-20T01:46:56.230425501Z" level=info msg="RemoveContainer for \"b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240\"" Jan 20 01:46:56.230686 sshd-session[5532]: pam_unix(sshd:session): session closed for user core Jan 20 01:46:56.346627 kubelet[3013]: I0120 01:46:56.340933 3013 scope.go:117] "RemoveContainer" containerID="149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170" Jan 20 01:46:56.346627 kubelet[3013]: E0120 01:46:56.341065 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:56.365985 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:40984.service: Deactivated successfully. Jan 20 01:46:56.433988 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 01:46:56.446092 containerd[1601]: time="2026-01-20T01:46:56.443013938Z" level=info msg="CreateContainer within sandbox \"d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Jan 20 01:46:56.489681 systemd-logind[1586]: Session 21 logged out. Waiting for processes to exit. Jan 20 01:46:56.517422 containerd[1601]: time="2026-01-20T01:46:56.503677148Z" level=info msg="RemoveContainer for \"b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240\" returns successfully" Jan 20 01:46:56.518106 kubelet[3013]: I0120 01:46:56.518065 3013 scope.go:117] "RemoveContainer" containerID="0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d" Jan 20 01:46:56.542359 systemd-logind[1586]: Removed session 21. 
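The pod_workers record above ("CrashLoopBackOff: back-off 10s restarting failed container=kube-scheduler") shows the restart throttle at its first step. As a rough sketch of the schedule, the kubelet doubles the delay per crash from a 10s base up to a 5m ceiling, resetting once a container stays up long enough; the loop below just prints that progression and is not kubelet code:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const (
    		base     = 10 * time.Second // initial CrashLoopBackOff delay, as in the log
    		maxDelay = 5 * time.Minute  // kubelet's back-off ceiling
    	)
    	d := base
    	for i := 1; i <= 7; i++ {
    		fmt.Printf("crash %d: next restart in %s\n", i, d)
    		d *= 2
    		if d > maxDelay {
    			d = maxDelay
    		}
    	}
    	// Prints 10s, 20s, 40s, 1m20s, 2m40s, then pins at 5m.
    }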
Jan 20 01:46:56.629363 containerd[1601]: time="2026-01-20T01:46:56.624277910Z" level=info msg="RemoveContainer for \"0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d\"" Jan 20 01:46:56.768799 containerd[1601]: time="2026-01-20T01:46:56.767107232Z" level=info msg="RemoveContainer for \"0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d\" returns successfully" Jan 20 01:46:56.923367 containerd[1601]: time="2026-01-20T01:46:56.919022308Z" level=info msg="Container bc4a612bb6bde8ec4faf2d866c21d12387a74b84db707c84ce76b379753305dc: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:46:57.051741 containerd[1601]: time="2026-01-20T01:46:57.041913191Z" level=info msg="CreateContainer within sandbox \"d3252c085149513be1d7129fa6f8640964306d297e6d8e93ce22005213810387\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"bc4a612bb6bde8ec4faf2d866c21d12387a74b84db707c84ce76b379753305dc\"" Jan 20 01:46:57.085556 containerd[1601]: time="2026-01-20T01:46:57.066858799Z" level=info msg="StartContainer for \"bc4a612bb6bde8ec4faf2d866c21d12387a74b84db707c84ce76b379753305dc\"" Jan 20 01:46:57.085556 containerd[1601]: time="2026-01-20T01:46:57.084829273Z" level=info msg="connecting to shim bc4a612bb6bde8ec4faf2d866c21d12387a74b84db707c84ce76b379753305dc" address="unix:///run/containerd/s/6e91a995477f8793a4c1b3ef2e8a5c979bc5b41b26937d30bb37b0b4bc643b41" protocol=ttrpc version=3 Jan 20 01:46:57.521590 systemd[1]: Started cri-containerd-bc4a612bb6bde8ec4faf2d866c21d12387a74b84db707c84ce76b379753305dc.scope - libcontainer container bc4a612bb6bde8ec4faf2d866c21d12387a74b84db707c84ce76b379753305dc. Jan 20 01:46:59.036803 kubelet[3013]: E0120 01:46:59.030534 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:59.406669 kubelet[3013]: I0120 01:46:59.401459 3013 scope.go:117] "RemoveContainer" containerID="61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada" Jan 20 01:46:59.406669 kubelet[3013]: E0120 01:46:59.401586 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:46:59.406669 kubelet[3013]: E0120 01:46:59.401701 3013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571" Jan 20 01:46:59.617950 containerd[1601]: time="2026-01-20T01:46:59.617820262Z" level=info msg="StartContainer for \"bc4a612bb6bde8ec4faf2d866c21d12387a74b84db707c84ce76b379753305dc\" returns successfully" Jan 20 01:47:00.108617 kubelet[3013]: E0120 01:47:00.107945 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:01.301630 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:52620.service - OpenSSH per-connection server daemon (10.0.0.1:52620). 
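The CreateContainer/StartContainer pair and the "connecting to shim ... protocol=ttrpc version=3" message above are the CRI plugin creating a task and dialing the per-container shim socket under /run/containerd/s/. The same lifecycle can be driven directly with containerd's Go client; a minimal sketch, assuming the v1 client import paths, a reachable containerd socket, and a placeholder image and container ID:

    package main

    import (
    	"context"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/cio"
    	"github.com/containerd/containerd/namespaces"
    	"github.com/containerd/containerd/oci"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// The CRI plugin seen in this log runs in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}

    	container, err := client.NewContainer(ctx, "demo",
    		containerd.WithNewSnapshot("demo-snap", image),
    		containerd.WithNewSpec(oci.WithImageConfig(image)))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    	// NewTask is the step that spawns the shim and opens the ttrpc
    	// connection corresponding to the "connecting to shim" message.
    	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer task.Delete(ctx)

    	// Corresponds to the "StartContainer ... returns successfully" record.
    	if err := task.Start(ctx); err != nil {
    		log.Fatal(err)
    	}
    }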
Jan 20 01:47:16.766952 kubelet[3013]: E0120 01:47:16.716882 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.715s" Jan 20 01:47:16.766952 kubelet[3013]: I0120 01:47:16.728339 3013 scope.go:117] "RemoveContainer" containerID="61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada" Jan 20 01:47:16.766952 kubelet[3013]: E0120 01:47:16.728521 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:16.866795 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 52620 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:47:16.805501 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:47:16.909601 kubelet[3013]: E0120 01:47:16.855570 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:17.045654 systemd-logind[1586]: New session 22 of user core. Jan 20 01:47:17.336163 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 01:47:17.633110 containerd[1601]: time="2026-01-20T01:47:17.630829007Z" level=info msg="CreateContainer within sandbox \"41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Jan 20 01:47:18.318634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1470563200.mount: Deactivated successfully. Jan 20 01:47:18.327278 containerd[1601]: time="2026-01-20T01:47:18.324800146Z" level=info msg="Container b42ec3fc362f55179a7803ca896b179518967bf26b848561a825681fdf8c5d3a: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:47:18.403373 containerd[1601]: time="2026-01-20T01:47:18.392612014Z" level=info msg="CreateContainer within sandbox \"41d0b2bcce781f217030b432c9c83f7b7226d7e5e7227e789236381105bca8d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"b42ec3fc362f55179a7803ca896b179518967bf26b848561a825681fdf8c5d3a\"" Jan 20 01:47:18.405502 containerd[1601]: time="2026-01-20T01:47:18.404936300Z" level=info msg="StartContainer for \"b42ec3fc362f55179a7803ca896b179518967bf26b848561a825681fdf8c5d3a\"" Jan 20 01:47:18.418791 containerd[1601]: time="2026-01-20T01:47:18.418388596Z" level=info msg="connecting to shim b42ec3fc362f55179a7803ca896b179518967bf26b848561a825681fdf8c5d3a" address="unix:///run/containerd/s/9129f3959f5e9283d6a67cf2cc4ab3904fcb2a03c6dab64ed6b13386e87b76e3" protocol=ttrpc version=3 Jan 20 01:47:18.614695 systemd[1]: Started cri-containerd-b42ec3fc362f55179a7803ca896b179518967bf26b848561a825681fdf8c5d3a.scope - libcontainer container b42ec3fc362f55179a7803ca896b179518967bf26b848561a825681fdf8c5d3a. Jan 20 01:47:19.837142 sshd[5665]: Connection closed by 10.0.0.1 port 52620 Jan 20 01:47:19.838489 sshd-session[5640]: pam_unix(sshd:session): session closed for user core Jan 20 01:47:19.868584 containerd[1601]: time="2026-01-20T01:47:19.867759323Z" level=info msg="StartContainer for \"b42ec3fc362f55179a7803ca896b179518967bf26b848561a825681fdf8c5d3a\" returns successfully" Jan 20 01:47:19.956684 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:52620.service: Deactivated successfully. Jan 20 01:47:20.002121 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 01:47:20.020428 systemd-logind[1586]: Session 22 logged out. 
Waiting for processes to exit. Jan 20 01:47:20.059445 systemd-logind[1586]: Removed session 22. Jan 20 01:47:20.169661 kubelet[3013]: E0120 01:47:20.165393 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:21.179587 kubelet[3013]: E0120 01:47:21.165614 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:23.815050 kubelet[3013]: E0120 01:47:23.811756 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:25.012638 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:45354.service - OpenSSH per-connection server daemon (10.0.0.1:45354). Jan 20 01:47:25.652264 sshd[5744]: Accepted publickey for core from 10.0.0.1 port 45354 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:47:25.701690 sshd-session[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:47:25.834568 systemd-logind[1586]: New session 23 of user core. Jan 20 01:47:25.909706 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 01:47:27.583941 sshd[5747]: Connection closed by 10.0.0.1 port 45354 Jan 20 01:47:27.599409 sshd-session[5744]: pam_unix(sshd:session): session closed for user core Jan 20 01:47:27.707781 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:45354.service: Deactivated successfully. Jan 20 01:47:27.762654 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 01:47:27.809930 systemd-logind[1586]: Session 23 logged out. Waiting for processes to exit. Jan 20 01:47:27.838416 systemd-logind[1586]: Removed session 23. Jan 20 01:47:32.729447 systemd[1]: Started sshd@23-10.0.0.36:22-10.0.0.1:45364.service - OpenSSH per-connection server daemon (10.0.0.1:45364). Jan 20 01:47:40.294698 kubelet[3013]: E0120 01:47:40.289329 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.879s" Jan 20 01:47:40.326179 sshd[5787]: Accepted publickey for core from 10.0.0.1 port 45364 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:47:40.331564 sshd-session[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:47:40.340061 kubelet[3013]: E0120 01:47:40.336719 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:40.340951 kubelet[3013]: E0120 01:47:40.340882 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:40.373860 systemd-logind[1586]: New session 24 of user core. Jan 20 01:47:40.859824 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 20 01:47:41.261379 kubelet[3013]: E0120 01:47:41.261105 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:45.001415 kubelet[3013]: E0120 01:47:44.999028 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:47:45.137274 sshd[5807]: Connection closed by 10.0.0.1 port 45364 Jan 20 01:47:45.146923 sshd-session[5787]: pam_unix(sshd:session): session closed for user core Jan 20 01:48:05.025442 systemd[1]: sshd@23-10.0.0.36:22-10.0.0.1:45364.service: Deactivated successfully. Jan 20 01:48:05.093909 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 01:48:05.251152 systemd-logind[1586]: Session 24 logged out. Waiting for processes to exit. Jan 20 01:48:06.318508 systemd[1]: Started sshd@24-10.0.0.36:22-10.0.0.1:44114.service - OpenSSH per-connection server daemon (10.0.0.1:44114). Jan 20 01:48:06.481596 systemd-logind[1586]: Removed session 24. Jan 20 01:48:07.191163 kubelet[3013]: E0120 01:48:07.162825 3013 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.5s" Jan 20 01:48:07.798474 kubelet[3013]: E0120 01:48:07.796063 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:07.816507 kubelet[3013]: E0120 01:48:07.809774 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:07.940854 sshd[5831]: Accepted publickey for core from 10.0.0.1 port 44114 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:48:08.013602 sshd-session[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:48:08.295161 systemd-logind[1586]: New session 25 of user core. Jan 20 01:48:08.311598 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 01:48:10.158369 sshd[5851]: Connection closed by 10.0.0.1 port 44114 Jan 20 01:48:10.157111 sshd-session[5831]: pam_unix(sshd:session): session closed for user core Jan 20 01:48:10.238881 systemd[1]: sshd@24-10.0.0.36:22-10.0.0.1:44114.service: Deactivated successfully. Jan 20 01:48:10.257957 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 01:48:10.323917 systemd-logind[1586]: Session 25 logged out. Waiting for processes to exit. Jan 20 01:48:10.332968 systemd[1]: Started sshd@25-10.0.0.36:22-10.0.0.1:59656.service - OpenSSH per-connection server daemon (10.0.0.1:59656). Jan 20 01:48:10.401634 systemd-logind[1586]: Removed session 25. Jan 20 01:48:10.567188 kubelet[3013]: E0120 01:48:10.564932 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:10.996987 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 59656 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:48:11.006994 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:48:11.116657 systemd-logind[1586]: New session 26 of user core. 
Jan 20 01:48:11.182850 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 01:48:11.546876 kubelet[3013]: E0120 01:48:11.546189 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:13.922599 sshd[5866]: Connection closed by 10.0.0.1 port 59656 Jan 20 01:48:13.943723 sshd-session[5863]: pam_unix(sshd:session): session closed for user core Jan 20 01:48:14.308442 systemd[1]: sshd@25-10.0.0.36:22-10.0.0.1:59656.service: Deactivated successfully. Jan 20 01:48:14.341554 systemd-logind[1586]: Session 26 logged out. Waiting for processes to exit. Jan 20 01:48:14.409503 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 01:48:14.447901 systemd-logind[1586]: Removed session 26. Jan 20 01:48:19.004177 systemd[1]: Started sshd@26-10.0.0.36:22-10.0.0.1:55168.service - OpenSSH per-connection server daemon (10.0.0.1:55168). Jan 20 01:48:19.522670 sshd[5905]: Accepted publickey for core from 10.0.0.1 port 55168 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:48:19.521820 sshd-session[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:48:19.605900 systemd-logind[1586]: New session 27 of user core. Jan 20 01:48:19.722189 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 01:48:20.429452 sshd[5914]: Connection closed by 10.0.0.1 port 55168 Jan 20 01:48:20.430532 sshd-session[5905]: pam_unix(sshd:session): session closed for user core Jan 20 01:48:20.452003 systemd[1]: sshd@26-10.0.0.36:22-10.0.0.1:55168.service: Deactivated successfully. Jan 20 01:48:20.467862 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 01:48:20.493425 systemd-logind[1586]: Session 27 logged out. Waiting for processes to exit. Jan 20 01:48:20.510293 systemd-logind[1586]: Removed session 27. Jan 20 01:48:25.495395 systemd[1]: Started sshd@27-10.0.0.36:22-10.0.0.1:37868.service - OpenSSH per-connection server daemon (10.0.0.1:37868). Jan 20 01:48:30.831515 sshd[5956]: Accepted publickey for core from 10.0.0.1 port 37868 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:48:30.870810 sshd-session[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:48:31.012541 systemd-logind[1586]: New session 28 of user core. Jan 20 01:48:31.049316 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 01:48:32.152592 sshd[5979]: Connection closed by 10.0.0.1 port 37868 Jan 20 01:48:32.163814 sshd-session[5956]: pam_unix(sshd:session): session closed for user core Jan 20 01:48:32.248388 systemd[1]: sshd@27-10.0.0.36:22-10.0.0.1:37868.service: Deactivated successfully. Jan 20 01:48:32.306034 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 01:48:32.338981 systemd-logind[1586]: Session 28 logged out. Waiting for processes to exit. Jan 20 01:48:32.386403 systemd-logind[1586]: Removed session 28. Jan 20 01:48:37.246540 systemd[1]: Started sshd@28-10.0.0.36:22-10.0.0.1:36554.service - OpenSSH per-connection server daemon (10.0.0.1:36554). Jan 20 01:48:37.717429 sshd[6016]: Accepted publickey for core from 10.0.0.1 port 36554 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:48:37.740512 sshd-session[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:48:37.788787 systemd-logind[1586]: New session 29 of user core. 
Jan 20 01:48:37.806921 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 20 01:48:38.798449 sshd[6019]: Connection closed by 10.0.0.1 port 36554 Jan 20 01:48:38.800853 sshd-session[6016]: pam_unix(sshd:session): session closed for user core Jan 20 01:48:38.819643 systemd[1]: sshd@28-10.0.0.36:22-10.0.0.1:36554.service: Deactivated successfully. Jan 20 01:48:38.832944 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 01:48:38.852524 systemd-logind[1586]: Session 29 logged out. Waiting for processes to exit. Jan 20 01:48:38.885385 systemd-logind[1586]: Removed session 29. Jan 20 01:48:43.906141 systemd[1]: Started sshd@29-10.0.0.36:22-10.0.0.1:36568.service - OpenSSH per-connection server daemon (10.0.0.1:36568). Jan 20 01:48:44.263910 sshd[6052]: Accepted publickey for core from 10.0.0.1 port 36568 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:48:44.284848 sshd-session[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:48:44.396486 systemd-logind[1586]: New session 30 of user core. Jan 20 01:48:44.539564 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 01:48:45.707882 sshd[6055]: Connection closed by 10.0.0.1 port 36568 Jan 20 01:48:45.714704 sshd-session[6052]: pam_unix(sshd:session): session closed for user core Jan 20 01:48:45.734777 systemd[1]: sshd@29-10.0.0.36:22-10.0.0.1:36568.service: Deactivated successfully. Jan 20 01:48:45.751272 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 01:48:45.766342 systemd-logind[1586]: Session 30 logged out. Waiting for processes to exit. Jan 20 01:48:45.777378 systemd-logind[1586]: Removed session 30. Jan 20 01:48:48.547103 kubelet[3013]: E0120 01:48:48.542795 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:50.825439 systemd[1]: Started sshd@30-10.0.0.36:22-10.0.0.1:58578.service - OpenSSH per-connection server daemon (10.0.0.1:58578). Jan 20 01:48:51.270879 sshd[6089]: Accepted publickey for core from 10.0.0.1 port 58578 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:48:51.300062 sshd-session[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:48:51.376894 systemd-logind[1586]: New session 31 of user core. Jan 20 01:48:51.393646 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 20 01:48:52.135694 sshd[6098]: Connection closed by 10.0.0.1 port 58578 Jan 20 01:48:52.138378 sshd-session[6089]: pam_unix(sshd:session): session closed for user core Jan 20 01:48:52.152550 systemd-logind[1586]: Session 31 logged out. Waiting for processes to exit. Jan 20 01:48:52.152676 systemd[1]: sshd@30-10.0.0.36:22-10.0.0.1:58578.service: Deactivated successfully. Jan 20 01:48:52.172420 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 01:48:52.179681 systemd-logind[1586]: Removed session 31. Jan 20 01:48:53.099576 kubelet[3013]: E0120 01:48:53.099079 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:48:57.924628 systemd[1]: Started sshd@31-10.0.0.36:22-10.0.0.1:59812.service - OpenSSH per-connection server daemon (10.0.0.1:59812). 
Jan 20 01:48:58.849749 sshd[6130]: Accepted publickey for core from 10.0.0.1 port 59812 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:48:58.862379 sshd-session[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:48:58.917509 systemd-logind[1586]: New session 32 of user core. Jan 20 01:48:58.989589 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 20 01:49:01.608256 kubelet[3013]: E0120 01:49:01.607175 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:01.930640 sshd[6149]: Connection closed by 10.0.0.1 port 59812 Jan 20 01:49:01.931727 sshd-session[6130]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:01.953779 systemd[1]: sshd@31-10.0.0.36:22-10.0.0.1:59812.service: Deactivated successfully. Jan 20 01:49:01.965819 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 01:49:01.994146 systemd-logind[1586]: Session 32 logged out. Waiting for processes to exit. Jan 20 01:49:02.019940 systemd-logind[1586]: Removed session 32. Jan 20 01:49:07.019331 systemd[1]: Started sshd@32-10.0.0.36:22-10.0.0.1:51850.service - OpenSSH per-connection server daemon (10.0.0.1:51850). Jan 20 01:49:07.541264 sshd[6186]: Accepted publickey for core from 10.0.0.1 port 51850 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:49:07.562546 sshd-session[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:07.636417 systemd-logind[1586]: New session 33 of user core. Jan 20 01:49:07.683170 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 20 01:49:08.624342 sshd[6189]: Connection closed by 10.0.0.1 port 51850 Jan 20 01:49:08.623516 sshd-session[6186]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:08.660512 systemd[1]: sshd@32-10.0.0.36:22-10.0.0.1:51850.service: Deactivated successfully. Jan 20 01:49:08.701519 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 01:49:08.726175 systemd-logind[1586]: Session 33 logged out. Waiting for processes to exit. Jan 20 01:49:08.746182 systemd-logind[1586]: Removed session 33. Jan 20 01:49:13.797557 systemd[1]: Started sshd@33-10.0.0.36:22-10.0.0.1:51858.service - OpenSSH per-connection server daemon (10.0.0.1:51858). Jan 20 01:49:14.552156 kubelet[3013]: E0120 01:49:14.549134 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:14.726188 sshd[6228]: Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:49:14.730043 sshd-session[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:14.819917 systemd-logind[1586]: New session 34 of user core. Jan 20 01:49:14.892341 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 20 01:49:15.923749 sshd[6248]: Connection closed by 10.0.0.1 port 51858 Jan 20 01:49:15.925075 sshd-session[6228]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:15.998109 systemd[1]: sshd@33-10.0.0.36:22-10.0.0.1:51858.service: Deactivated successfully. Jan 20 01:49:16.030457 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 01:49:16.088340 systemd-logind[1586]: Session 34 logged out. Waiting for processes to exit. 
Jan 20 01:49:16.120935 systemd-logind[1586]: Removed session 34. Jan 20 01:49:21.013888 systemd[1]: Started sshd@34-10.0.0.36:22-10.0.0.1:34612.service - OpenSSH per-connection server daemon (10.0.0.1:34612). Jan 20 01:49:21.590164 sshd[6281]: Accepted publickey for core from 10.0.0.1 port 34612 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:49:21.588155 sshd-session[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:21.671921 systemd-logind[1586]: New session 35 of user core. Jan 20 01:49:21.722804 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 20 01:49:22.555470 kubelet[3013]: E0120 01:49:22.542887 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:23.166240 sshd[6284]: Connection closed by 10.0.0.1 port 34612 Jan 20 01:49:23.179526 sshd-session[6281]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:23.246654 systemd[1]: sshd@34-10.0.0.36:22-10.0.0.1:34612.service: Deactivated successfully. Jan 20 01:49:23.282933 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 01:49:23.311517 systemd-logind[1586]: Session 35 logged out. Waiting for processes to exit. Jan 20 01:49:23.350523 systemd-logind[1586]: Removed session 35. Jan 20 01:49:28.267964 systemd[1]: Started sshd@35-10.0.0.36:22-10.0.0.1:56324.service - OpenSSH per-connection server daemon (10.0.0.1:56324). Jan 20 01:49:28.667069 sshd[6317]: Accepted publickey for core from 10.0.0.1 port 56324 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:49:28.699760 sshd-session[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:28.759356 systemd-logind[1586]: New session 36 of user core. Jan 20 01:49:28.770708 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 20 01:49:30.016031 sshd[6320]: Connection closed by 10.0.0.1 port 56324 Jan 20 01:49:30.022500 sshd-session[6317]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:30.079449 systemd[1]: sshd@35-10.0.0.36:22-10.0.0.1:56324.service: Deactivated successfully. Jan 20 01:49:30.107696 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 01:49:30.121443 systemd-logind[1586]: Session 36 logged out. Waiting for processes to exit. Jan 20 01:49:30.136989 systemd-logind[1586]: Removed session 36. Jan 20 01:49:33.551017 kubelet[3013]: E0120 01:49:33.549024 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:35.155689 systemd[1]: Started sshd@36-10.0.0.36:22-10.0.0.1:52738.service - OpenSSH per-connection server daemon (10.0.0.1:52738). Jan 20 01:49:35.910301 sshd[6364]: Accepted publickey for core from 10.0.0.1 port 52738 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:49:35.936442 sshd-session[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:36.037342 systemd-logind[1586]: New session 37 of user core. Jan 20 01:49:36.071352 systemd[1]: Started session-37.scope - Session 37 of User core. 
Jan 20 01:49:37.133645 sshd[6370]: Connection closed by 10.0.0.1 port 52738 Jan 20 01:49:37.135782 sshd-session[6364]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:37.171700 systemd[1]: sshd@36-10.0.0.36:22-10.0.0.1:52738.service: Deactivated successfully. Jan 20 01:49:37.203449 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 01:49:37.237426 systemd-logind[1586]: Session 37 logged out. Waiting for processes to exit. Jan 20 01:49:37.267464 systemd-logind[1586]: Removed session 37. Jan 20 01:49:38.556746 kubelet[3013]: E0120 01:49:38.550471 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:49:42.239150 systemd[1]: Started sshd@37-10.0.0.36:22-10.0.0.1:52742.service - OpenSSH per-connection server daemon (10.0.0.1:52742). Jan 20 01:49:42.840338 sshd[6415]: Accepted publickey for core from 10.0.0.1 port 52742 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:49:42.873012 sshd-session[6415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:42.992100 systemd-logind[1586]: New session 38 of user core. Jan 20 01:49:43.038911 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 20 01:49:44.295502 sshd[6418]: Connection closed by 10.0.0.1 port 52742 Jan 20 01:49:44.296023 sshd-session[6415]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:44.326351 systemd-logind[1586]: Session 38 logged out. Waiting for processes to exit. Jan 20 01:49:44.326543 systemd[1]: sshd@37-10.0.0.36:22-10.0.0.1:52742.service: Deactivated successfully. Jan 20 01:49:44.354681 systemd[1]: session-38.scope: Deactivated successfully. Jan 20 01:49:44.381120 systemd-logind[1586]: Removed session 38. Jan 20 01:49:49.383868 systemd[1]: Started sshd@38-10.0.0.36:22-10.0.0.1:35790.service - OpenSSH per-connection server daemon (10.0.0.1:35790). Jan 20 01:49:49.971385 sshd[6453]: Accepted publickey for core from 10.0.0.1 port 35790 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:49:49.992662 sshd-session[6453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:50.093761 systemd-logind[1586]: New session 39 of user core. Jan 20 01:49:50.130556 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 20 01:49:51.414730 sshd[6462]: Connection closed by 10.0.0.1 port 35790 Jan 20 01:49:51.421574 sshd-session[6453]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:51.458171 systemd-logind[1586]: Session 39 logged out. Waiting for processes to exit. Jan 20 01:49:51.474946 systemd[1]: sshd@38-10.0.0.36:22-10.0.0.1:35790.service: Deactivated successfully. Jan 20 01:49:51.495060 systemd[1]: session-39.scope: Deactivated successfully. Jan 20 01:49:51.527117 systemd-logind[1586]: Removed session 39. Jan 20 01:49:56.536405 systemd[1]: Started sshd@39-10.0.0.36:22-10.0.0.1:46210.service - OpenSSH per-connection server daemon (10.0.0.1:46210). Jan 20 01:49:56.857985 sshd[6496]: Accepted publickey for core from 10.0.0.1 port 46210 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:49:56.865664 sshd-session[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:56.938406 systemd-logind[1586]: New session 40 of user core. Jan 20 01:49:56.953835 systemd[1]: Started session-40.scope - Session 40 of User core. 
Jan 20 01:49:58.027012 sshd[6499]: Connection closed by 10.0.0.1 port 46210 Jan 20 01:49:58.025569 sshd-session[6496]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:58.123356 systemd[1]: sshd@39-10.0.0.36:22-10.0.0.1:46210.service: Deactivated successfully. Jan 20 01:49:58.162154 systemd[1]: session-40.scope: Deactivated successfully. Jan 20 01:49:58.187646 systemd-logind[1586]: Session 40 logged out. Waiting for processes to exit. Jan 20 01:49:58.246430 systemd[1]: Started sshd@40-10.0.0.36:22-10.0.0.1:46218.service - OpenSSH per-connection server daemon (10.0.0.1:46218). Jan 20 01:49:58.267839 systemd-logind[1586]: Removed session 40. Jan 20 01:49:58.690778 sshd[6526]: Accepted publickey for core from 10.0.0.1 port 46218 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:49:58.704754 sshd-session[6526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:58.789428 systemd-logind[1586]: New session 41 of user core. Jan 20 01:49:58.809488 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 20 01:49:59.555884 kubelet[3013]: E0120 01:49:59.555746 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:01.748898 sshd[6529]: Connection closed by 10.0.0.1 port 46218 Jan 20 01:50:01.757100 sshd-session[6526]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:01.825951 systemd[1]: sshd@40-10.0.0.36:22-10.0.0.1:46218.service: Deactivated successfully. Jan 20 01:50:01.856426 systemd[1]: session-41.scope: Deactivated successfully. Jan 20 01:50:01.900938 systemd-logind[1586]: Session 41 logged out. Waiting for processes to exit. Jan 20 01:50:01.925121 systemd[1]: Started sshd@41-10.0.0.36:22-10.0.0.1:46224.service - OpenSSH per-connection server daemon (10.0.0.1:46224). Jan 20 01:50:01.956747 systemd-logind[1586]: Removed session 41. Jan 20 01:50:02.390170 sshd[6547]: Accepted publickey for core from 10.0.0.1 port 46224 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:50:02.380516 sshd-session[6547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:50:02.455945 systemd-logind[1586]: New session 42 of user core. Jan 20 01:50:02.507626 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 20 01:50:07.662755 sshd[6552]: Connection closed by 10.0.0.1 port 46224 Jan 20 01:50:07.664642 sshd-session[6547]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:07.750972 systemd[1]: sshd@41-10.0.0.36:22-10.0.0.1:46224.service: Deactivated successfully. Jan 20 01:50:07.790590 systemd[1]: session-42.scope: Deactivated successfully. Jan 20 01:50:07.791047 systemd[1]: session-42.scope: Consumed 1.209s CPU time, 37.1M memory peak. Jan 20 01:50:07.817782 systemd-logind[1586]: Session 42 logged out. Waiting for processes to exit. Jan 20 01:50:07.842510 systemd[1]: Started sshd@42-10.0.0.36:22-10.0.0.1:52570.service - OpenSSH per-connection server daemon (10.0.0.1:52570). Jan 20 01:50:07.860584 systemd-logind[1586]: Removed session 42. Jan 20 01:50:08.304838 sshd[6593]: Accepted publickey for core from 10.0.0.1 port 52570 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:50:08.308037 sshd-session[6593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:50:08.370798 systemd-logind[1586]: New session 43 of user core. 
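Most of the rest of the log is this steady rhythm of per-connection sshd services and session scopes; the "session-42.scope: Consumed 1.209s CPU time, 37.1M memory peak" record above is systemd's cgroup accounting, reported as each scope is torn down. To get session durations out of a dump like this, one can pair each "New session N" with its "Removed session N"; a sketch, assuming this journal's timestamp shape (no year is logged, which is fine for same-log durations, and the month pattern is hard-coded to this log's January records):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"time"
    )

    // Timestamp shape used throughout this journal, e.g. "Jan 20 01:50:07.791047".
    const stamp = "Jan _2 15:04:05.000000"

    var (
    	opened  = regexp.MustCompile(`(Jan +\d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user`)
    	removed = regexp.MustCompile(`(Jan +\d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
    )

    func main() {
    	start := map[string]time.Time{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1<<20), 1<<20)
    	for sc.Scan() {
    		line := sc.Text()
    		for _, m := range opened.FindAllStringSubmatch(line, -1) {
    			if t, err := time.Parse(stamp, m[1]); err == nil {
    				start[m[2]] = t
    			}
    		}
    		for _, m := range removed.FindAllStringSubmatch(line, -1) {
    			if t, err := time.Parse(stamp, m[1]); err == nil {
    				if s, ok := start[m[2]]; ok {
    					fmt.Printf("session %s: open %s\n", m[2], t.Sub(s))
    				}
    			}
    		}
    	}
    }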
Jan 20 01:50:08.420174 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 20 01:50:11.284658 sshd[6599]: Connection closed by 10.0.0.1 port 52570 Jan 20 01:50:11.275155 sshd-session[6593]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:11.343905 systemd[1]: sshd@42-10.0.0.36:22-10.0.0.1:52570.service: Deactivated successfully. Jan 20 01:50:11.361644 systemd[1]: session-43.scope: Deactivated successfully. Jan 20 01:50:11.390607 systemd-logind[1586]: Session 43 logged out. Waiting for processes to exit. Jan 20 01:50:11.411164 systemd-logind[1586]: Removed session 43. Jan 20 01:50:11.427788 systemd[1]: Started sshd@43-10.0.0.36:22-10.0.0.1:52586.service - OpenSSH per-connection server daemon (10.0.0.1:52586). Jan 20 01:50:11.902510 sshd[6629]: Accepted publickey for core from 10.0.0.1 port 52586 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:50:11.908819 sshd-session[6629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:50:11.961417 systemd-logind[1586]: New session 44 of user core. Jan 20 01:50:12.019966 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 20 01:50:12.628782 sshd[6632]: Connection closed by 10.0.0.1 port 52586 Jan 20 01:50:12.630701 sshd-session[6629]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:12.664826 systemd[1]: sshd@43-10.0.0.36:22-10.0.0.1:52586.service: Deactivated successfully. Jan 20 01:50:12.704890 systemd[1]: session-44.scope: Deactivated successfully. Jan 20 01:50:12.716299 systemd-logind[1586]: Session 44 logged out. Waiting for processes to exit. Jan 20 01:50:12.729890 systemd-logind[1586]: Removed session 44. Jan 20 01:50:16.550735 kubelet[3013]: E0120 01:50:16.547875 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:16.550735 kubelet[3013]: E0120 01:50:16.549898 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:17.759844 systemd[1]: Started sshd@44-10.0.0.36:22-10.0.0.1:37848.service - OpenSSH per-connection server daemon (10.0.0.1:37848). Jan 20 01:50:18.287998 sshd[6665]: Accepted publickey for core from 10.0.0.1 port 37848 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:50:18.293946 sshd-session[6665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:50:18.348181 systemd-logind[1586]: New session 45 of user core. Jan 20 01:50:18.409863 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 20 01:50:19.296482 sshd[6668]: Connection closed by 10.0.0.1 port 37848 Jan 20 01:50:19.292363 sshd-session[6665]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:19.316883 systemd[1]: sshd@44-10.0.0.36:22-10.0.0.1:37848.service: Deactivated successfully. Jan 20 01:50:19.345797 systemd[1]: session-45.scope: Deactivated successfully. Jan 20 01:50:19.366363 systemd-logind[1586]: Session 45 logged out. Waiting for processes to exit. Jan 20 01:50:19.390176 systemd-logind[1586]: Removed session 45. Jan 20 01:50:24.401494 systemd[1]: Started sshd@45-10.0.0.36:22-10.0.0.1:37852.service - OpenSSH per-connection server daemon (10.0.0.1:37852). 
Jan 20 01:50:24.819263 sshd[6701]: Accepted publickey for core from 10.0.0.1 port 37852 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:50:24.846056 sshd-session[6701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:50:24.892106 systemd-logind[1586]: New session 46 of user core. Jan 20 01:50:24.935694 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 20 01:50:25.364911 sshd[6718]: Connection closed by 10.0.0.1 port 37852 Jan 20 01:50:25.366088 sshd-session[6701]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:25.371991 systemd[1]: sshd@45-10.0.0.36:22-10.0.0.1:37852.service: Deactivated successfully. Jan 20 01:50:25.377332 systemd[1]: session-46.scope: Deactivated successfully. Jan 20 01:50:25.388733 systemd-logind[1586]: Session 46 logged out. Waiting for processes to exit. Jan 20 01:50:25.392399 systemd-logind[1586]: Removed session 46. Jan 20 01:50:29.566071 kubelet[3013]: E0120 01:50:29.549735 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:30.463349 systemd[1]: Started sshd@46-10.0.0.36:22-10.0.0.1:41034.service - OpenSSH per-connection server daemon (10.0.0.1:41034). Jan 20 01:50:31.070653 sshd[6752]: Accepted publickey for core from 10.0.0.1 port 41034 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:50:31.111732 sshd-session[6752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:50:31.187348 systemd-logind[1586]: New session 47 of user core. Jan 20 01:50:31.244970 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 20 01:50:32.247825 sshd[6761]: Connection closed by 10.0.0.1 port 41034 Jan 20 01:50:32.239842 sshd-session[6752]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:32.326046 systemd[1]: sshd@46-10.0.0.36:22-10.0.0.1:41034.service: Deactivated successfully. Jan 20 01:50:32.389099 systemd[1]: session-47.scope: Deactivated successfully. Jan 20 01:50:32.421745 systemd-logind[1586]: Session 47 logged out. Waiting for processes to exit. Jan 20 01:50:32.438675 systemd-logind[1586]: Removed session 47. Jan 20 01:50:37.296668 systemd[1]: Started sshd@47-10.0.0.36:22-10.0.0.1:36462.service - OpenSSH per-connection server daemon (10.0.0.1:36462). Jan 20 01:50:37.653461 sshd[6799]: Accepted publickey for core from 10.0.0.1 port 36462 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:50:37.693755 sshd-session[6799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:50:37.763342 systemd-logind[1586]: New session 48 of user core. Jan 20 01:50:37.814727 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 20 01:50:38.549054 kubelet[3013]: E0120 01:50:38.544495 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:38.917366 sshd[6802]: Connection closed by 10.0.0.1 port 36462 Jan 20 01:50:38.918266 sshd-session[6799]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:38.954454 systemd[1]: sshd@47-10.0.0.36:22-10.0.0.1:36462.service: Deactivated successfully. Jan 20 01:50:38.956787 systemd-logind[1586]: Session 48 logged out. Waiting for processes to exit. Jan 20 01:50:38.974098 systemd[1]: session-48.scope: Deactivated successfully. 
Jan 20 01:50:39.014068 systemd-logind[1586]: Removed session 48. Jan 20 01:50:44.000423 systemd[1]: Started sshd@48-10.0.0.36:22-10.0.0.1:36468.service - OpenSSH per-connection server daemon (10.0.0.1:36468). Jan 20 01:50:44.563092 sshd[6835]: Accepted publickey for core from 10.0.0.1 port 36468 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:50:44.604903 sshd-session[6835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:50:44.678989 systemd-logind[1586]: New session 49 of user core. Jan 20 01:50:44.702442 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 20 01:50:45.575306 kubelet[3013]: E0120 01:50:45.575166 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:45.652895 sshd[6838]: Connection closed by 10.0.0.1 port 36468 Jan 20 01:50:45.653029 sshd-session[6835]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:45.671562 systemd[1]: sshd@48-10.0.0.36:22-10.0.0.1:36468.service: Deactivated successfully. Jan 20 01:50:45.683874 systemd[1]: session-49.scope: Deactivated successfully. Jan 20 01:50:45.717582 systemd-logind[1586]: Session 49 logged out. Waiting for processes to exit. Jan 20 01:50:45.746003 systemd-logind[1586]: Removed session 49. Jan 20 01:50:50.764902 systemd[1]: Started sshd@49-10.0.0.36:22-10.0.0.1:54724.service - OpenSSH per-connection server daemon (10.0.0.1:54724). Jan 20 01:50:51.203641 sshd[6872]: Accepted publickey for core from 10.0.0.1 port 54724 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:50:51.218279 sshd-session[6872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:50:51.282064 systemd-logind[1586]: New session 50 of user core. Jan 20 01:50:51.324675 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 20 01:50:52.469033 sshd[6889]: Connection closed by 10.0.0.1 port 54724 Jan 20 01:50:52.486327 sshd-session[6872]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:52.544061 systemd[1]: sshd@49-10.0.0.36:22-10.0.0.1:54724.service: Deactivated successfully. Jan 20 01:50:52.566378 systemd[1]: session-50.scope: Deactivated successfully. Jan 20 01:50:52.604640 systemd-logind[1586]: Session 50 logged out. Waiting for processes to exit. Jan 20 01:50:52.650255 systemd-logind[1586]: Removed session 50. Jan 20 01:50:57.535340 systemd[1]: Started sshd@50-10.0.0.36:22-10.0.0.1:59178.service - OpenSSH per-connection server daemon (10.0.0.1:59178). Jan 20 01:50:57.904416 sshd[6929]: Accepted publickey for core from 10.0.0.1 port 59178 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:50:57.907528 sshd-session[6929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:50:57.967791 systemd-logind[1586]: New session 51 of user core. Jan 20 01:50:57.973734 systemd[1]: Started session-51.scope - Session 51 of User core. 
Jan 20 01:50:58.559320 kubelet[3013]: E0120 01:50:58.543084 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:50:58.660326 sshd[6932]: Connection closed by 10.0.0.1 port 59178 Jan 20 01:50:58.672115 sshd-session[6929]: pam_unix(sshd:session): session closed for user core Jan 20 01:50:58.685757 systemd[1]: sshd@50-10.0.0.36:22-10.0.0.1:59178.service: Deactivated successfully. Jan 20 01:50:58.701586 systemd[1]: session-51.scope: Deactivated successfully. Jan 20 01:50:58.759802 systemd-logind[1586]: Session 51 logged out. Waiting for processes to exit. Jan 20 01:50:58.762390 systemd-logind[1586]: Removed session 51. Jan 20 01:51:03.750784 systemd[1]: Started sshd@51-10.0.0.36:22-10.0.0.1:59180.service - OpenSSH per-connection server daemon (10.0.0.1:59180). Jan 20 01:51:04.221269 sshd[6967]: Accepted publickey for core from 10.0.0.1 port 59180 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:51:04.237394 sshd-session[6967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:51:04.320890 systemd-logind[1586]: New session 52 of user core. Jan 20 01:51:04.357575 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 20 01:51:05.628472 sshd[6970]: Connection closed by 10.0.0.1 port 59180 Jan 20 01:51:05.630281 sshd-session[6967]: pam_unix(sshd:session): session closed for user core Jan 20 01:51:05.689504 systemd[1]: sshd@51-10.0.0.36:22-10.0.0.1:59180.service: Deactivated successfully. Jan 20 01:51:05.734369 systemd[1]: session-52.scope: Deactivated successfully. Jan 20 01:51:05.744672 systemd-logind[1586]: Session 52 logged out. Waiting for processes to exit. Jan 20 01:51:05.764677 systemd-logind[1586]: Removed session 52. Jan 20 01:51:10.702147 systemd[1]: Started sshd@52-10.0.0.36:22-10.0.0.1:60938.service - OpenSSH per-connection server daemon (10.0.0.1:60938). Jan 20 01:51:11.028344 sshd[7003]: Accepted publickey for core from 10.0.0.1 port 60938 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:51:11.044125 sshd-session[7003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:51:11.131526 systemd-logind[1586]: New session 53 of user core. Jan 20 01:51:11.173713 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 20 01:51:12.106937 sshd[7006]: Connection closed by 10.0.0.1 port 60938 Jan 20 01:51:12.103530 sshd-session[7003]: pam_unix(sshd:session): session closed for user core Jan 20 01:51:12.129167 systemd[1]: sshd@52-10.0.0.36:22-10.0.0.1:60938.service: Deactivated successfully. Jan 20 01:51:12.145902 systemd[1]: session-53.scope: Deactivated successfully. Jan 20 01:51:12.162793 systemd-logind[1586]: Session 53 logged out. Waiting for processes to exit. Jan 20 01:51:12.165635 systemd-logind[1586]: Removed session 53. Jan 20 01:51:17.974889 systemd[1]: Started sshd@53-10.0.0.36:22-10.0.0.1:37684.service - OpenSSH per-connection server daemon (10.0.0.1:37684). 
Jan 20 01:51:18.249702 kubelet[3013]: E0120 01:51:18.249374 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:19.521968 sshd[7042]: Accepted publickey for core from 10.0.0.1 port 37684 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:51:19.524492 sshd-session[7042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:51:19.560397 systemd-logind[1586]: New session 54 of user core. Jan 20 01:51:19.624542 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 20 01:51:20.360499 sshd[7062]: Connection closed by 10.0.0.1 port 37684 Jan 20 01:51:20.386001 sshd-session[7042]: pam_unix(sshd:session): session closed for user core Jan 20 01:51:20.419746 systemd[1]: sshd@53-10.0.0.36:22-10.0.0.1:37684.service: Deactivated successfully. Jan 20 01:51:20.422191 systemd-logind[1586]: Session 54 logged out. Waiting for processes to exit. Jan 20 01:51:20.433619 systemd[1]: session-54.scope: Deactivated successfully. Jan 20 01:51:20.465591 systemd-logind[1586]: Removed session 54. Jan 20 01:51:20.549010 kubelet[3013]: E0120 01:51:20.548829 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:25.488731 systemd[1]: Started sshd@54-10.0.0.36:22-10.0.0.1:40280.service - OpenSSH per-connection server daemon (10.0.0.1:40280). Jan 20 01:51:25.810517 sshd[7095]: Accepted publickey for core from 10.0.0.1 port 40280 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:51:25.809895 sshd-session[7095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:51:25.847811 systemd-logind[1586]: New session 55 of user core. Jan 20 01:51:25.882825 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 20 01:51:26.608999 sshd[7098]: Connection closed by 10.0.0.1 port 40280 Jan 20 01:51:26.614668 sshd-session[7095]: pam_unix(sshd:session): session closed for user core Jan 20 01:51:26.647281 systemd[1]: sshd@54-10.0.0.36:22-10.0.0.1:40280.service: Deactivated successfully. Jan 20 01:51:26.676391 systemd[1]: session-55.scope: Deactivated successfully. Jan 20 01:51:26.690626 systemd-logind[1586]: Session 55 logged out. Waiting for processes to exit. Jan 20 01:51:26.744140 systemd-logind[1586]: Removed session 55. Jan 20 01:51:31.723696 systemd[1]: Started sshd@55-10.0.0.36:22-10.0.0.1:40292.service - OpenSSH per-connection server daemon (10.0.0.1:40292). Jan 20 01:51:32.143364 sshd[7134]: Accepted publickey for core from 10.0.0.1 port 40292 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:51:32.147722 sshd-session[7134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:51:32.205020 systemd-logind[1586]: New session 56 of user core. Jan 20 01:51:32.227523 systemd[1]: Started session-56.scope - Session 56 of User core. 
Jan 20 01:51:32.567276 kubelet[3013]: E0120 01:51:32.566695 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:32.966737 sshd[7139]: Connection closed by 10.0.0.1 port 40292 Jan 20 01:51:32.968799 sshd-session[7134]: pam_unix(sshd:session): session closed for user core Jan 20 01:51:33.023405 systemd[1]: sshd@55-10.0.0.36:22-10.0.0.1:40292.service: Deactivated successfully. Jan 20 01:51:33.049553 systemd[1]: session-56.scope: Deactivated successfully. Jan 20 01:51:33.056903 systemd-logind[1586]: Session 56 logged out. Waiting for processes to exit. Jan 20 01:51:33.065628 systemd-logind[1586]: Removed session 56. Jan 20 01:51:38.049274 systemd[1]: Started sshd@56-10.0.0.36:22-10.0.0.1:36026.service - OpenSSH per-connection server daemon (10.0.0.1:36026). Jan 20 01:51:38.397791 sshd[7172]: Accepted publickey for core from 10.0.0.1 port 36026 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:51:38.402510 sshd-session[7172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:51:38.517357 systemd-logind[1586]: New session 57 of user core. Jan 20 01:51:38.553624 systemd[1]: Started session-57.scope - Session 57 of User core. Jan 20 01:51:39.586421 sshd[7175]: Connection closed by 10.0.0.1 port 36026 Jan 20 01:51:39.577784 sshd-session[7172]: pam_unix(sshd:session): session closed for user core Jan 20 01:51:39.629749 systemd[1]: sshd@56-10.0.0.36:22-10.0.0.1:36026.service: Deactivated successfully. Jan 20 01:51:39.659184 systemd[1]: session-57.scope: Deactivated successfully. Jan 20 01:51:39.662815 systemd-logind[1586]: Session 57 logged out. Waiting for processes to exit. Jan 20 01:51:39.684076 systemd-logind[1586]: Removed session 57. Jan 20 01:51:44.667149 systemd[1]: Started sshd@57-10.0.0.36:22-10.0.0.1:45804.service - OpenSSH per-connection server daemon (10.0.0.1:45804). Jan 20 01:51:44.941017 sshd[7214]: Accepted publickey for core from 10.0.0.1 port 45804 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:51:44.942978 sshd-session[7214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:51:44.980107 systemd-logind[1586]: New session 58 of user core. Jan 20 01:51:45.011099 systemd[1]: Started session-58.scope - Session 58 of User core. Jan 20 01:51:45.567873 kubelet[3013]: E0120 01:51:45.565020 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:45.861459 sshd[7220]: Connection closed by 10.0.0.1 port 45804 Jan 20 01:51:45.854521 sshd-session[7214]: pam_unix(sshd:session): session closed for user core Jan 20 01:51:45.897807 systemd[1]: sshd@57-10.0.0.36:22-10.0.0.1:45804.service: Deactivated successfully. Jan 20 01:51:45.933184 systemd[1]: session-58.scope: Deactivated successfully. Jan 20 01:51:45.958494 systemd-logind[1586]: Session 58 logged out. Waiting for processes to exit. Jan 20 01:51:45.980902 systemd-logind[1586]: Removed session 58. 
Jan 20 01:51:50.550519 kubelet[3013]: E0120 01:51:50.545902 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:51:50.911612 systemd[1]: Started sshd@58-10.0.0.36:22-10.0.0.1:45808.service - OpenSSH per-connection server daemon (10.0.0.1:45808). Jan 20 01:51:51.314106 sshd[7264]: Accepted publickey for core from 10.0.0.1 port 45808 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:51:51.330665 sshd-session[7264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:51:51.414424 systemd-logind[1586]: New session 59 of user core. Jan 20 01:51:51.456857 systemd[1]: Started session-59.scope - Session 59 of User core. Jan 20 01:51:52.459343 sshd[7267]: Connection closed by 10.0.0.1 port 45808 Jan 20 01:51:52.463740 sshd-session[7264]: pam_unix(sshd:session): session closed for user core Jan 20 01:51:52.511016 systemd[1]: sshd@58-10.0.0.36:22-10.0.0.1:45808.service: Deactivated successfully. Jan 20 01:51:52.548480 systemd[1]: session-59.scope: Deactivated successfully. Jan 20 01:51:52.566461 systemd-logind[1586]: Session 59 logged out. Waiting for processes to exit. Jan 20 01:51:52.583900 systemd-logind[1586]: Removed session 59. Jan 20 01:51:55.244634 containerd[1601]: time="2026-01-20T01:51:55.244320521Z" level=warning msg="container event discarded" container=149e345664180112b35af15ba1a490fba8b12f0a105dc89083f917ffa5538170 type=CONTAINER_STOPPED_EVENT Jan 20 01:51:55.274410 containerd[1601]: time="2026-01-20T01:51:55.268140703Z" level=warning msg="container event discarded" container=61a436f7918100d95644965b70a2b3ffae0d29f4fed1d531ef4c6a2d1416eada type=CONTAINER_STOPPED_EVENT Jan 20 01:51:56.537974 containerd[1601]: time="2026-01-20T01:51:56.527112559Z" level=warning msg="container event discarded" container=b5dc8c59c768559a6cefe89e19cfcf215601b5cd434629d6ff18840d5516d240 type=CONTAINER_DELETED_EVENT Jan 20 01:51:56.823441 containerd[1601]: time="2026-01-20T01:51:56.823259341Z" level=warning msg="container event discarded" container=0a2bd9388bf1e250f044c821b212538b7020b100ed161ba59e161984bdec533d type=CONTAINER_DELETED_EVENT Jan 20 01:51:57.033647 containerd[1601]: time="2026-01-20T01:51:57.033188403Z" level=warning msg="container event discarded" container=bc4a612bb6bde8ec4faf2d866c21d12387a74b84db707c84ce76b379753305dc type=CONTAINER_CREATED_EVENT Jan 20 01:51:57.557678 systemd[1]: Started sshd@59-10.0.0.36:22-10.0.0.1:42758.service - OpenSSH per-connection server daemon (10.0.0.1:42758). Jan 20 01:51:58.057835 sshd[7301]: Accepted publickey for core from 10.0.0.1 port 42758 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:51:58.098508 sshd-session[7301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:51:58.151648 systemd-logind[1586]: New session 60 of user core. Jan 20 01:51:58.169882 systemd[1]: Started session-60.scope - Session 60 of User core. Jan 20 01:51:59.266110 sshd[7304]: Connection closed by 10.0.0.1 port 42758 Jan 20 01:51:59.268627 sshd-session[7301]: pam_unix(sshd:session): session closed for user core Jan 20 01:51:59.317901 systemd[1]: sshd@59-10.0.0.36:22-10.0.0.1:42758.service: Deactivated successfully. Jan 20 01:51:59.346715 systemd[1]: session-60.scope: Deactivated successfully. Jan 20 01:51:59.400853 systemd-logind[1586]: Session 60 logged out. Waiting for processes to exit. 
Jan 20 01:51:59.441459 systemd-logind[1586]: Removed session 60. Jan 20 01:51:59.640948 containerd[1601]: time="2026-01-20T01:51:59.634519822Z" level=warning msg="container event discarded" container=bc4a612bb6bde8ec4faf2d866c21d12387a74b84db707c84ce76b379753305dc type=CONTAINER_STARTED_EVENT Jan 20 01:52:01.560275 kubelet[3013]: E0120 01:52:01.549980 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:04.353678 systemd[1]: Started sshd@60-10.0.0.36:22-10.0.0.1:42768.service - OpenSSH per-connection server daemon (10.0.0.1:42768). Jan 20 01:52:04.596303 sshd[7339]: Accepted publickey for core from 10.0.0.1 port 42768 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:52:04.609127 sshd-session[7339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:52:04.663356 systemd-logind[1586]: New session 61 of user core. Jan 20 01:52:04.694690 systemd[1]: Started session-61.scope - Session 61 of User core. Jan 20 01:52:05.512440 sshd[7346]: Connection closed by 10.0.0.1 port 42768 Jan 20 01:52:05.514562 sshd-session[7339]: pam_unix(sshd:session): session closed for user core Jan 20 01:52:05.562458 systemd[1]: sshd@60-10.0.0.36:22-10.0.0.1:42768.service: Deactivated successfully. Jan 20 01:52:05.614429 systemd[1]: session-61.scope: Deactivated successfully. Jan 20 01:52:05.660488 systemd-logind[1586]: Session 61 logged out. Waiting for processes to exit. Jan 20 01:52:05.700915 systemd-logind[1586]: Removed session 61. Jan 20 01:52:10.566778 systemd[1]: Started sshd@61-10.0.0.36:22-10.0.0.1:44476.service - OpenSSH per-connection server daemon (10.0.0.1:44476). Jan 20 01:52:10.779462 sshd[7383]: Accepted publickey for core from 10.0.0.1 port 44476 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:52:10.786003 sshd-session[7383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:52:10.823287 systemd-logind[1586]: New session 62 of user core. Jan 20 01:52:10.843480 systemd[1]: Started session-62.scope - Session 62 of User core. Jan 20 01:52:11.514959 sshd[7386]: Connection closed by 10.0.0.1 port 44476 Jan 20 01:52:11.514727 sshd-session[7383]: pam_unix(sshd:session): session closed for user core Jan 20 01:52:11.532137 systemd-logind[1586]: Session 62 logged out. Waiting for processes to exit. Jan 20 01:52:11.535168 systemd[1]: sshd@61-10.0.0.36:22-10.0.0.1:44476.service: Deactivated successfully. Jan 20 01:52:11.547781 systemd[1]: session-62.scope: Deactivated successfully. Jan 20 01:52:11.566699 systemd-logind[1586]: Removed session 62. Jan 20 01:52:15.546266 kubelet[3013]: E0120 01:52:15.545954 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:16.585814 systemd[1]: Started sshd@62-10.0.0.36:22-10.0.0.1:36720.service - OpenSSH per-connection server daemon (10.0.0.1:36720). Jan 20 01:52:17.059318 sshd[7419]: Accepted publickey for core from 10.0.0.1 port 36720 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:52:17.070355 sshd-session[7419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:52:17.118948 systemd-logind[1586]: New session 63 of user core. Jan 20 01:52:17.129987 systemd[1]: Started session-63.scope - Session 63 of User core. 
Jan 20 01:52:18.265397 sshd[7436]: Connection closed by 10.0.0.1 port 36720 Jan 20 01:52:18.267532 sshd-session[7419]: pam_unix(sshd:session): session closed for user core Jan 20 01:52:18.319559 systemd[1]: sshd@62-10.0.0.36:22-10.0.0.1:36720.service: Deactivated successfully. Jan 20 01:52:18.336836 systemd[1]: session-63.scope: Deactivated successfully. Jan 20 01:52:18.353535 systemd-logind[1586]: Session 63 logged out. Waiting for processes to exit. Jan 20 01:52:18.391985 systemd-logind[1586]: Removed session 63. Jan 20 01:52:18.404849 containerd[1601]: time="2026-01-20T01:52:18.404751687Z" level=warning msg="container event discarded" container=b42ec3fc362f55179a7803ca896b179518967bf26b848561a825681fdf8c5d3a type=CONTAINER_CREATED_EVENT Jan 20 01:52:19.870922 containerd[1601]: time="2026-01-20T01:52:19.870714882Z" level=warning msg="container event discarded" container=b42ec3fc362f55179a7803ca896b179518967bf26b848561a825681fdf8c5d3a type=CONTAINER_STARTED_EVENT Jan 20 01:52:24.835724 kubelet[3013]: E0120 01:52:24.835610 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:25.242351 systemd[1]: Started sshd@63-10.0.0.36:22-10.0.0.1:36730.service - OpenSSH per-connection server daemon (10.0.0.1:36730). Jan 20 01:52:25.727092 sshd[7469]: Accepted publickey for core from 10.0.0.1 port 36730 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:52:25.741649 sshd-session[7469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:52:25.813677 systemd-logind[1586]: New session 64 of user core. Jan 20 01:52:25.855400 systemd[1]: Started session-64.scope - Session 64 of User core. Jan 20 01:52:26.549687 sshd[7478]: Connection closed by 10.0.0.1 port 36730 Jan 20 01:52:26.552454 sshd-session[7469]: pam_unix(sshd:session): session closed for user core Jan 20 01:52:26.587750 systemd[1]: sshd@63-10.0.0.36:22-10.0.0.1:36730.service: Deactivated successfully. Jan 20 01:52:26.594086 systemd[1]: session-64.scope: Deactivated successfully. Jan 20 01:52:26.616983 systemd-logind[1586]: Session 64 logged out. Waiting for processes to exit. Jan 20 01:52:26.635838 systemd-logind[1586]: Removed session 64. Jan 20 01:52:31.635306 systemd[1]: Started sshd@64-10.0.0.36:22-10.0.0.1:59546.service - OpenSSH per-connection server daemon (10.0.0.1:59546). Jan 20 01:52:31.929684 sshd[7512]: Accepted publickey for core from 10.0.0.1 port 59546 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:52:31.935160 sshd-session[7512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:52:31.972483 systemd-logind[1586]: New session 65 of user core. Jan 20 01:52:31.999629 systemd[1]: Started session-65.scope - Session 65 of User core. Jan 20 01:52:32.551844 kubelet[3013]: E0120 01:52:32.548482 3013 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:32.897666 sshd[7517]: Connection closed by 10.0.0.1 port 59546 Jan 20 01:52:32.896912 sshd-session[7512]: pam_unix(sshd:session): session closed for user core Jan 20 01:52:32.936150 systemd[1]: sshd@64-10.0.0.36:22-10.0.0.1:59546.service: Deactivated successfully. Jan 20 01:52:32.945186 systemd[1]: session-65.scope: Deactivated successfully. Jan 20 01:52:32.959526 systemd-logind[1586]: Session 65 logged out. Waiting for processes to exit.
Jan 20 01:52:32.969150 systemd-logind[1586]: Removed session 65. Jan 20 01:52:37.956037 systemd[1]: Started sshd@65-10.0.0.36:22-10.0.0.1:60580.service - OpenSSH per-connection server daemon (10.0.0.1:60580). Jan 20 01:52:38.227268 sshd[7564]: Accepted publickey for core from 10.0.0.1 port 60580 ssh2: RSA SHA256:XEBmLlSTiiubxdx4UPGJFskIr8d6O+zi4bdjyt9s1Hk Jan 20 01:52:38.229375 sshd-session[7564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:52:38.298363 systemd-logind[1586]: New session 66 of user core. Jan 20 01:52:38.334634 systemd[1]: Started session-66.scope - Session 66 of User core. Jan 20 01:52:39.663588 sshd[7571]: Connection closed by 10.0.0.1 port 60580 Jan 20 01:52:39.662905 sshd-session[7564]: pam_unix(sshd:session): session closed for user core Jan 20 01:52:39.690577 systemd[1]: sshd@65-10.0.0.36:22-10.0.0.1:60580.service: Deactivated successfully. Jan 20 01:52:39.711549 systemd[1]: session-66.scope: Deactivated successfully. Jan 20 01:52:39.724676 systemd-logind[1586]: Session 66 logged out. Waiting for processes to exit. Jan 20 01:52:39.741246 systemd-logind[1586]: Removed session 66.