Mar 7 01:18:55.686419 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:18:55.686454 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:18:55.686473 kernel: BIOS-provided physical RAM map:
Mar 7 01:18:55.686482 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 7 01:18:55.686491 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 7 01:18:55.686499 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:18:55.686509 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 7 01:18:55.686518 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 7 01:18:55.686526 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:18:55.686539 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:18:55.686547 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:18:55.686557 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:18:55.686568 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:18:55.686579 kernel: NX (Execute Disable) protection: active
Mar 7 01:18:55.686592 kernel: APIC: Static calls initialized
Mar 7 01:18:55.686608 kernel: SMBIOS 2.8 present.
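
The e820 map above is the firmware's description of physical memory; the two "usable" ranges are what the kernel can actually allocate from. A minimal sketch of how one might total them up from captured dmesg text, parsing exactly the line format shown above:

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(dmesg: str) -> int:
        # Ranges are inclusive, hence the +1.
        total = 0
        for lo, hi, kind in E820_RE.findall(dmesg):
            if kind == "usable":
                total += int(hi, 16) - int(lo, 16) + 1
        return total

    log = """\
    BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
    """
    print(f"{usable_bytes(log) / 2**20:.1f} MiB")   # ~2511.5 MiB for this VM
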
Mar 7 01:18:55.686619 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 7 01:18:55.686631 kernel: Hypervisor detected: KVM
Mar 7 01:18:55.686642 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:18:55.686654 kernel: kvm-clock: using sched offset of 37274793496 cycles
Mar 7 01:18:55.686668 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:18:55.686678 kernel: tsc: Detected 2445.426 MHz processor
Mar 7 01:18:55.686690 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:18:55.686703 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:18:55.686721 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 7 01:18:55.686734 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:18:55.686745 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:18:55.686757 kernel: Using GB pages for direct mapping
Mar 7 01:18:55.686769 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:18:55.686779 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 7 01:18:55.686791 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:18:55.686803 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:18:55.686815 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:18:55.686835 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 7 01:18:55.686846 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:18:55.686857 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:18:55.686867 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:18:55.686878 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:18:55.686888 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 7 01:18:55.686899 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 7 01:18:55.686919 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 7 01:18:55.686937 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 7 01:18:55.686948 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 7 01:18:55.686959 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 7 01:18:55.686970 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 7 01:18:55.686981 kernel: No NUMA configuration found
Mar 7 01:18:55.686992 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 7 01:18:55.687009 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 7 01:18:55.687021 kernel: Zone ranges:
Mar 7 01:18:55.687033 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:18:55.687046 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 7 01:18:55.687118 kernel: Normal empty
Mar 7 01:18:55.687136 kernel: Movable zone start for each node
Mar 7 01:18:55.687148 kernel: Early memory node ranges
Mar 7 01:18:55.687160 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:18:55.687172 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 7 01:18:55.687184 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 7 01:18:55.687205 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:18:55.687216 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:18:55.687226 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 7 01:18:55.687236 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:18:55.687247 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:18:55.687258 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:18:55.693190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:18:55.693204 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:18:55.693215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:18:55.693240 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:18:55.693250 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:18:55.693298 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:18:55.693313 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:18:55.693326 kernel: TSC deadline timer available
Mar 7 01:18:55.693337 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 7 01:18:55.693348 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:18:55.693358 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:18:55.693369 kernel: kvm-guest: setup PV sched yield
Mar 7 01:18:55.693386 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:18:55.693397 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:18:55.693409 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:18:55.693420 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 7 01:18:55.693431 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 7 01:18:55.693444 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 7 01:18:55.693458 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 7 01:18:55.693470 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:18:55.693483 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:18:55.693673 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:18:55.693688 kernel: random: crng init done
Mar 7 01:18:55.693699 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:18:55.693710 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:18:55.693720 kernel: Fallback order for Node 0: 0
Mar 7 01:18:55.693730 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 7 01:18:55.693740 kernel: Policy zone: DMA32
Mar 7 01:18:55.693751 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:18:55.693770 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 7 01:18:55.693782 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 7 01:18:55.693793 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:18:55.693805 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:18:55.693816 kernel: Dynamic Preempt: voluntary
Mar 7 01:18:55.693828 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:18:55.693839 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:18:55.693850 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 7 01:18:55.693861 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:18:55.693879 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:18:55.693891 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:18:55.693902 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:18:55.693913 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 7 01:18:55.693925 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 7 01:18:55.693936 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:18:55.693948 kernel: Console: colour VGA+ 80x25
Mar 7 01:18:55.693959 kernel: printk: console [ttyS0] enabled
Mar 7 01:18:55.693969 kernel: ACPI: Core revision 20230628
Mar 7 01:18:55.693981 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:18:55.693999 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:18:55.694010 kernel: x2apic enabled
Mar 7 01:18:55.694021 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:18:55.694032 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:18:55.694043 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:18:55.694053 kernel: kvm-guest: setup PV IPIs
Mar 7 01:18:55.695387 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:18:55.695425 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:18:55.695440 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 7 01:18:55.695453 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:18:55.695466 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:18:55.695485 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:18:55.695498 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:18:55.695511 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:18:55.695523 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:18:55.695535 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:18:55.695554 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:18:55.695567 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:18:55.695577 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:18:55.695587 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:18:55.695599 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:18:55.695611 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:18:55.695624 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:18:55.695636 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:18:55.695655 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:18:55.695669 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:18:55.695681 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 7 01:18:55.695693 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:18:55.695704 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:18:55.695716 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:18:55.695727 kernel: landlock: Up and running.
Mar 7 01:18:55.695739 kernel: SELinux: Initializing.
Mar 7 01:18:55.695751 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:18:55.695770 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:18:55.695783 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:18:55.695797 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:18:55.695810 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:18:55.695823 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:18:55.695837 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 7 01:18:55.695850 kernel: signal: max sigframe size: 1776
Mar 7 01:18:55.695863 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:18:55.695875 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:18:55.695894 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:18:55.695906 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:18:55.695918 kernel: smpboot: x86: Booting SMP configuration:
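
The mitigation lines above (Spectre V1/V2, Speculative Store Bypass, SRSO, Transient Scheduler Attacks) are also exposed at runtime under sysfs, so the same status can be read back from a booted guest. A minimal sketch, assuming it runs on the guest itself:

    from pathlib import Path

    # Each file under this directory mirrors one of the mitigation lines
    # printed at boot (spectre_v1, spectre_v2, srso, ...).
    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
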
Mar 7 01:18:55.695930 kernel: .... node #0, CPUs: #1 #2 #3
Mar 7 01:18:55.695941 kernel: smp: Brought up 1 node, 4 CPUs
Mar 7 01:18:55.695952 kernel: smpboot: Max logical packages: 1
Mar 7 01:18:55.695962 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 7 01:18:55.695973 kernel: devtmpfs: initialized
Mar 7 01:18:55.695983 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:18:55.695998 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:18:55.696009 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 7 01:18:55.696020 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:18:55.696031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:18:55.696041 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:18:55.698389 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:18:55.698413 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:18:55.698426 kernel: audit: type=2000 audit(1772846320.140:1): state=initialized audit_enabled=0 res=1
Mar 7 01:18:55.698440 kernel: cpuidle: using governor menu
Mar 7 01:18:55.698463 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:18:55.698477 kernel: dca service started, version 1.12.1
Mar 7 01:18:55.698491 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:18:55.698505 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:18:55.698518 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:18:55.698532 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:18:55.698544 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:18:55.698555 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:18:55.698568 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:18:55.698589 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:18:55.698603 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:18:55.698616 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:18:55.698627 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:18:55.698638 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:18:55.698649 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:18:55.698660 kernel: ACPI: Interpreter enabled
Mar 7 01:18:55.698670 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:18:55.698681 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:18:55.698696 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:18:55.698707 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:18:55.698720 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:18:55.698733 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:18:55.699211 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:18:55.703906 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:18:55.704254 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:18:55.704329 kernel: PCI host bridge to bus 0000:00
Mar 7 01:18:55.704581 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:18:55.704810 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
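
The calibration numbers earlier are internally consistent: with CONFIG_HZ=1000 (an assumption, but the one consistent with these figures), BogoMIPS works out to lpj/500, and the SMP total is just four times the per-CPU value. A quick arithmetic check:

    # From "Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)"
    lpj = 2_445_426
    bogomips = lpj / 500                  # assumes HZ=1000
    print(f"{bogomips:.2f}")              # 4890.85, matching the per-CPU line
    print(f"{4 * round(bogomips, 2):.2f}")  # 19563.40, matching "Total of 4 processors"
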
Mar 7 01:18:55.705031 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:18:55.707622 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 7 01:18:55.707833 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:18:55.708050 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 7 01:18:55.709429 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:18:55.709687 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:18:55.709923 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:18:55.710238 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:18:55.716777 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:18:55.716987 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:18:55.717237 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:18:55.718566 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 18554 usecs
Mar 7 01:18:55.718845 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 01:18:55.719172 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 7 01:18:55.719455 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:18:55.719695 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:18:55.719954 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 7 01:18:55.721513 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 7 01:18:55.721741 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:18:55.721981 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:18:55.722360 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:18:55.722603 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 7 01:18:55.722845 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:18:55.723156 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 7 01:18:55.728826 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:18:55.729174 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:18:55.730027 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:18:55.742502 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 12695 usecs
Mar 7 01:18:55.742799 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:18:55.743051 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 7 01:18:55.743447 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 7 01:18:55.743727 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:18:55.743970 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:18:55.743994 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:18:55.744007 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:18:55.744021 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:18:55.744034 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:18:55.744047 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:18:55.746374 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:18:55.746407 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
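
The [vendor:device] pairs in the enumeration above identify QEMU's emulated hardware; these are well-known IDs (1af4 is the virtio vendor, 8086 the emulated Intel chipset). A small lookup table for the devices seen in this log:

    # Lookup for the vendor:device pairs printed during PCI enumeration above.
    PCI_IDS = {
        (0x8086, 0x29c0): "Intel 82G33/P35 host bridge (Q35 machine type)",
        (0x1234, 0x1111): "QEMU standard VGA",
        (0x1af4, 0x1005): "virtio-rng (entropy source)",
        (0x1af4, 0x1001): "virtio-blk (the 10.1 GB vda disk below)",
        (0x1af4, 0x1000): "virtio-net (becomes eth0)",
        (0x8086, 0x2918): "Intel ICH9 LPC bridge",
        (0x8086, 0x2922): "Intel ICH9 AHCI SATA controller",
        (0x8086, 0x2930): "Intel ICH9 SMBus",
    }

    for (vendor, device), name in PCI_IDS.items():
        print(f"{vendor:04x}:{device:04x}  {name}")
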
Mar 7 01:18:55.746422 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:18:55.746435 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:18:55.746449 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:18:55.746462 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:18:55.746475 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:18:55.746488 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:18:55.746502 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:18:55.746515 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:18:55.746535 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:18:55.746548 kernel: iommu: Default domain type: Translated
Mar 7 01:18:55.746562 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:18:55.746575 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:18:55.746588 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:18:55.746602 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 7 01:18:55.746614 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 7 01:18:55.746870 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:18:55.748626 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:18:55.748877 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:18:55.748899 kernel: vgaarb: loaded
Mar 7 01:18:55.748913 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:18:55.748927 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:18:55.748939 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:18:55.748951 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:18:55.748963 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:18:55.748973 kernel: pnp: PnP ACPI init
Mar 7 01:18:55.751698 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:18:55.751737 kernel: pnp: PnP ACPI: found 6 devices
Mar 7 01:18:55.751753 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:18:55.751766 kernel: NET: Registered PF_INET protocol family
Mar 7 01:18:55.751780 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:18:55.751792 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:18:55.751804 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:18:55.751817 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:18:55.751829 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:18:55.751850 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:18:55.751864 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:18:55.751878 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:18:55.751891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:18:55.751905 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:18:55.752231 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:18:55.752510 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:18:55.752703 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:18:55.752884 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 7 01:18:55.753120 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:18:55.753354 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 7 01:18:55.753375 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:18:55.753387 kernel: Initialise system trusted keyrings
Mar 7 01:18:55.753400 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:18:55.753413 kernel: Key type asymmetric registered
Mar 7 01:18:55.753425 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:18:55.753438 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:18:55.753459 kernel: io scheduler mq-deadline registered
Mar 7 01:18:55.753472 kernel: io scheduler kyber registered
Mar 7 01:18:55.753485 kernel: io scheduler bfq registered
Mar 7 01:18:55.753498 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:18:55.753511 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:18:55.753524 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:18:55.753537 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 7 01:18:55.753550 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:18:55.753564 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:18:55.753583 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:18:55.753596 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:18:55.753609 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:18:55.753853 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 7 01:18:55.753875 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 7 01:18:55.754173 kernel: rtc_cmos 00:04: registered as rtc0
Mar 7 01:18:55.754446 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:18:52 UTC (1772846332)
Mar 7 01:18:55.754673 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:18:55.754701 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:18:55.754715 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:18:55.754729 kernel: Segment Routing with IPv6
Mar 7 01:18:55.754741 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:18:55.754754 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:18:55.754767 kernel: Key type dns_resolver registered
Mar 7 01:18:55.754780 kernel: IPI shorthand broadcast: enabled
Mar 7 01:18:55.754793 kernel: sched_clock: Marking stable (8387020992, 811318392)->(13039538634, -3841199250)
Mar 7 01:18:55.754806 kernel: registered taskstats version 1
Mar 7 01:18:55.754825 kernel: Loading compiled-in X.509 certificates
Mar 7 01:18:55.754839 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:18:55.754852 kernel: Key type .fscrypt registered
Mar 7 01:18:55.754865 kernel: Key type fscrypt-provisioning registered
Mar 7 01:18:55.754878 kernel: ima: No TPM chip found, activating TPM-bypass!
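
The rtc_cmos line above reports the same instant in two forms, ISO-8601 and Unix epoch; the two can be cross-checked directly:

    from datetime import datetime, timezone

    # From "setting system clock to 2026-03-07T01:18:52 UTC (1772846332)"
    epoch = 1772846332
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2026-03-07T01:18:52+00:00
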
Mar 7 01:18:55.754891 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:18:55.754904 kernel: ima: No architecture policies found
Mar 7 01:18:55.754917 kernel: clk: Disabling unused clocks
Mar 7 01:18:55.754930 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:18:55.754950 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:18:55.754963 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:18:55.754976 kernel: Run /init as init process
Mar 7 01:18:55.754989 kernel: with arguments:
Mar 7 01:18:55.755002 kernel: /init
Mar 7 01:18:55.755014 kernel: with environment:
Mar 7 01:18:55.755027 kernel: HOME=/
Mar 7 01:18:55.755039 kernel: TERM=linux
Mar 7 01:18:55.755056 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:18:55.755167 systemd[1]: Detected virtualization kvm.
Mar 7 01:18:55.755183 systemd[1]: Detected architecture x86-64.
Mar 7 01:18:55.755196 systemd[1]: Running in initrd.
Mar 7 01:18:55.755209 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:18:55.755223 systemd[1]: Hostname set to <localhost>.
Mar 7 01:18:55.755237 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:18:55.755250 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:18:55.760823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:18:55.760851 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:18:55.760868 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:18:55.760884 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:18:55.760899 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:18:55.760912 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:18:55.760928 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:18:55.760954 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:18:55.760969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:18:55.760983 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:18:55.760998 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:18:55.761037 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:18:55.761118 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:18:55.761148 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:18:55.761163 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:18:55.761177 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:18:55.761192 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:18:55.761207 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:18:55.761222 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:18:55.761237 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:18:55.761250 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:18:55.761306 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:18:55.761331 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:18:55.761347 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:18:55.761360 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:18:55.761375 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:18:55.761390 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:18:55.761404 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:18:55.761419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:18:55.761498 systemd-journald[194]: Collecting audit messages is disabled.
Mar 7 01:18:55.761541 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:18:55.761557 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:18:55.761571 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:18:55.761592 systemd-journald[194]: Journal started
Mar 7 01:18:55.761619 systemd-journald[194]: Runtime Journal (/run/log/journal/a8ac1047cf684690b6c55a10cae4cbd4) is 6.0M, max 48.4M, 42.3M free.
Mar 7 01:18:55.766251 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:18:55.999582 systemd-modules-load[195]: Inserted module 'overlay'
Mar 7 01:18:56.626038 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:18:56.626302 kernel: Bridge firewalling registered
Mar 7 01:18:56.009915 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:18:56.161376 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 7 01:18:56.644811 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:18:56.652948 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:18:56.653469 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:18:56.654311 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:18:56.750168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:18:56.839394 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:18:56.956181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:18:57.028815 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:18:57.081934 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:18:57.133969 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:18:57.150412 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:18:57.162873 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:18:57.269761 dracut-cmdline[229]: dracut-dracut-053
Mar 7 01:18:57.212884 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:18:57.312052 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:18:57.520601 systemd-resolved[238]: Positive Trust Anchors:
Mar 7 01:18:57.522854 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:18:57.525430 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:18:57.549802 systemd-resolved[238]: Defaulting to hostname 'linux'.
Mar 7 01:18:57.564253 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:18:57.708216 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:18:57.893544 kernel: SCSI subsystem initialized
Mar 7 01:18:57.925822 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:18:57.991187 kernel: iscsi: registered transport (tcp)
Mar 7 01:18:58.060342 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:18:58.060712 kernel: QLogic iSCSI HBA Driver
Mar 7 01:18:58.311586 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:18:58.368500 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:18:58.479399 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:18:58.479491 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:18:58.481417 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:18:58.618347 kernel: raid6: avx2x4 gen() 17573 MB/s
Mar 7 01:18:58.638352 kernel: raid6: avx2x2 gen() 14702 MB/s
Mar 7 01:18:58.662591 kernel: raid6: avx2x1 gen() 9366 MB/s
Mar 7 01:18:58.663913 kernel: raid6: using algorithm avx2x4 gen() 17573 MB/s
Mar 7 01:18:58.685394 kernel: raid6: .... xor() 2727 MB/s, rmw enabled
Mar 7 01:18:58.685992 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:18:58.775715 kernel: xor: automatically using best checksumming function avx
Mar 7 01:18:59.541465 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:18:59.619168 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:18:59.670582 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:18:59.724808 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Mar 7 01:18:59.738778 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:18:59.812816 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:18:59.919475 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Mar 7 01:19:00.081485 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
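
The raid6 lines above show the kernel benchmarking each available gen() implementation and keeping the fastest (avx2x4 here). A small sketch that re-derives the same choice from the log text:

    import re

    # The kernel keeps the fastest implementation; this reproduces
    # that selection from the benchmark lines above.
    bench = """\
    raid6: avx2x4 gen() 17573 MB/s
    raid6: avx2x2 gen() 14702 MB/s
    raid6: avx2x1 gen() 9366 MB/s
    """
    results = re.findall(r"raid6: (\S+) gen\(\) (\d+) MB/s", bench)
    name, speed = max(results, key=lambda r: int(r[1]))
    print(f"using algorithm {name} gen() {speed} MB/s")
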
Mar 7 01:19:00.112024 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:19:00.586887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:19:00.664942 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:19:00.823902 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:19:00.872923 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:19:00.913890 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:19:00.957382 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:19:01.071706 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:19:01.253019 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:19:01.255176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:19:01.290537 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:19:01.300423 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:19:01.300770 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:19:01.314175 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:19:01.460204 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 7 01:19:01.460665 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:19:01.499767 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 7 01:19:01.548679 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:19:01.549032 kernel: GPT:9289727 != 19775487
Mar 7 01:19:01.549135 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:19:01.571053 kernel: GPT:9289727 != 19775487
Mar 7 01:19:01.571203 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:19:01.571229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:19:01.561232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:19:01.648791 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:19:02.654901 kernel: libata version 3.00 loaded.
Mar 7 01:19:02.947152 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 01:19:02.947661 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 01:19:02.949464 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:19:02.962533 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 01:19:02.965871 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 01:19:03.215604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:19:03.285589 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:19:03.351619 kernel: scsi host0: ahci
Mar 7 01:19:03.372634 kernel: scsi host1: ahci
Mar 7 01:19:03.374588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:19:03.450366 kernel: scsi host2: ahci
Mar 7 01:19:03.450742 kernel: scsi host3: ahci
Mar 7 01:19:03.462157 kernel: scsi host4: ahci
Mar 7 01:19:03.471133 kernel: scsi host5: ahci
Mar 7 01:19:03.522814 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31
Mar 7 01:19:03.522899 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31
Mar 7 01:19:03.522919 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31
Mar 7 01:19:03.522937 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31
Mar 7 01:19:03.522954 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472)
Mar 7 01:19:03.522982 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31
Mar 7 01:19:03.522998 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31
Mar 7 01:19:03.579040 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:19:03.684865 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (465)
Mar 7 01:19:03.684527 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 7 01:19:03.796459 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 7 01:19:03.897478 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 01:19:04.028870 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 7 01:19:04.028917 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 01:19:04.028940 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 01:19:04.028959 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 01:19:04.028978 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 01:19:04.029016 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 01:19:04.029037 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 7 01:19:04.003992 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 7 01:19:04.052600 kernel: ata3.00: applying bridge limits
Mar 7 01:19:04.052647 kernel: ata3.00: configured for UDMA/100
Mar 7 01:19:04.092338 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 7 01:19:04.106913 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 7 01:19:04.226251 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:19:04.352842 disk-uuid[564]: Primary Header is updated.
Mar 7 01:19:04.352842 disk-uuid[564]: Secondary Entries is updated.
Mar 7 01:19:04.352842 disk-uuid[564]: Secondary Header is updated.
Mar 7 01:19:04.439038 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:19:04.502450 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:19:04.547596 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:19:04.894695 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 7 01:19:04.899760 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 01:19:05.022263 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 7 01:19:05.568591 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:19:05.590423 disk-uuid[565]: The operation has completed successfully.
Mar 7 01:19:07.338636 systemd[1]: disk-uuid.service: Deactivated successfully.
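
The GPT warnings a few entries back ("9289727 != 19775487") are the classic signature of a disk image written onto a larger virtual disk: the backup GPT header still sits where the image's last sector was, not the disk's. The numbers pin down the original image size, and disk-uuid.service then rewrote the secondary header and entries, as logged above. A quick check of the arithmetic:

    # From the virtio_blk and GPT lines above (512-byte sectors).
    disk_blocks = 19_775_488
    expected_alt_lba = disk_blocks - 1   # backup header belongs on the last LBA
    found_alt_lba = 9_289_727            # where the primary header says it is

    print(f"expected {expected_alt_lba}, found {found_alt_lba}")
    print(f"original image size ~ {(found_alt_lba + 1) * 512 / 2**30:.2f} GiB")
    # -> expected 19775487, found 9289727; image was ~4.43 GiB
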
Mar 7 01:19:07.339024 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:19:07.456680 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:19:07.547460 sh[595]: Success
Mar 7 01:19:07.878697 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 01:19:08.284651 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:19:08.400229 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:19:08.475455 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:19:08.681822 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:19:08.685451 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:19:08.685505 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:19:08.718469 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:19:08.718566 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:19:08.958188 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:19:09.037672 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:19:09.109706 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:19:09.143026 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:19:09.444188 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:19:09.445042 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:19:09.449383 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:19:09.635224 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:19:09.788471 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:19:09.826737 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:19:09.995141 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:19:10.087857 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:19:11.275010 kernel: hrtimer: interrupt took 6192491 ns
Mar 7 01:19:12.344557 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:19:12.469711 systemd[1]: Starting systemd-networkd.service - Network Configuration...
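
verity-setup above maps /dev/mapper/usr against the root hash passed on the kernel command line (verity.usrhash=531e04...), which is how Flatcar authenticates the read-only /usr partition. A minimal sketch that extracts those parameters from /proc/cmdline on the guest:

    from pathlib import Path

    cmdline = Path("/proc/cmdline").read_text()
    # Split only on the first '=' so values like PARTUUID=... stay intact.
    params = dict(kv.split("=", 1) for kv in cmdline.split() if "=" in kv)

    print("usr device: ", params.get("verity.usr"))
    print("usr roothash:", params.get("verity.usrhash"))
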
Mar 7 01:19:12.535431 ignition[698]: Ignition 2.19.0
Mar 7 01:19:12.535445 ignition[698]: Stage: fetch-offline
Mar 7 01:19:12.535613 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:19:12.535669 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:19:12.535964 ignition[698]: parsed url from cmdline: ""
Mar 7 01:19:12.535971 ignition[698]: no config URL provided
Mar 7 01:19:12.535981 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:19:12.535998 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:19:12.536043 ignition[698]: op(1): [started] loading QEMU firmware config module
Mar 7 01:19:12.536052 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 7 01:19:12.683383 ignition[698]: op(1): [finished] loading QEMU firmware config module
Mar 7 01:19:12.965672 systemd-networkd[784]: lo: Link UP
Mar 7 01:19:12.967213 systemd-networkd[784]: lo: Gained carrier
Mar 7 01:19:12.977006 systemd-networkd[784]: Enumeration completed
Mar 7 01:19:12.987805 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:19:12.991309 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:19:12.991361 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:19:13.200939 ignition[698]: parsing config with SHA512: 5f5c68a08b6cc755efd4882a05e1ffcbdc9512501426cb58fe1382162c268e5c5ce15e790cfb528050280e74b00cc4073c8f66fa41644899f7ac07ecc629b122
Mar 7 01:19:13.019981 systemd-networkd[784]: eth0: Link UP
Mar 7 01:19:13.295010 ignition[698]: fetch-offline: fetch-offline passed
Mar 7 01:19:13.019990 systemd-networkd[784]: eth0: Gained carrier
Mar 7 01:19:13.295310 ignition[698]: Ignition finished successfully
Mar 7 01:19:13.020009 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:19:13.151431 systemd[1]: Reached target network.target - Network.
Mar 7 01:19:13.294173 unknown[698]: fetched base config from "system"
Mar 7 01:19:13.294188 unknown[698]: fetched user config from "qemu"
Mar 7 01:19:13.303827 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:19:13.317490 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 7 01:19:13.449042 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:19:13.480308 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 01:19:13.943935 ignition[789]: Ignition 2.19.0
Mar 7 01:19:13.943954 ignition[789]: Stage: kargs
Mar 7 01:19:13.944295 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:19:13.944313 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:19:14.018246 ignition[789]: kargs: kargs passed
Mar 7 01:19:14.019619 ignition[789]: Ignition finished successfully
Mar 7 01:19:14.104962 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:19:14.206176 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
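
Ignition logs the SHA512 of the config it parsed (the long digest above). To check a local copy of that config against the log, hash it the same way; the file path below is hypothetical, since on QEMU the user config is delivered via fw_cfg rather than read from a regular file:

    import hashlib

    with open("user.ign", "rb") as f:   # hypothetical local copy of the config
        digest = hashlib.sha512(f.read()).hexdigest()

    logged = ("5f5c68a08b6cc755efd4882a05e1ffcbdc9512501426cb58fe1382162c268e5c"
              "5ce15e790cfb528050280e74b00cc4073c8f66fa41644899f7ac07ecc629b122")
    print("match" if digest == logged else "mismatch")
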
Mar 7 01:19:14.476244 ignition[798]: Ignition 2.19.0
Mar 7 01:19:14.476267 ignition[798]: Stage: disks
Mar 7 01:19:14.476844 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:19:14.476867 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:19:14.545188 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:19:14.485947 ignition[798]: disks: disks passed
Mar 7 01:19:14.596634 systemd-networkd[784]: eth0: Gained IPv6LL
Mar 7 01:19:14.486187 ignition[798]: Ignition finished successfully
Mar 7 01:19:14.679739 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:19:14.729666 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:19:14.729785 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:19:14.729860 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:19:14.729926 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:19:15.058916 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:19:15.207045 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:19:15.242903 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:19:15.381657 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:19:17.435956 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:19:17.478040 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:19:17.495746 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:19:17.570564 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:19:17.611521 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:19:17.628715 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:19:17.628816 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:19:17.628879 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:19:17.702495 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818)
Mar 7 01:19:17.781543 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:19:17.781622 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:19:17.781639 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:19:17.904645 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:19:17.959430 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:19:18.005511 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:19:18.046486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:19:18.702682 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:19:18.794635 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:19:18.905381 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:19:19.055754 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:19:20.828251 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:19:20.891630 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:19:20.968916 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:19:21.006889 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:19:21.005597 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:19:21.339788 ignition[932]: INFO : Ignition 2.19.0
Mar 7 01:19:21.339788 ignition[932]: INFO : Stage: mount
Mar 7 01:19:21.339788 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:19:21.339788 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:19:21.339788 ignition[932]: INFO : mount: mount passed
Mar 7 01:19:21.339788 ignition[932]: INFO : Ignition finished successfully
Mar 7 01:19:21.453205 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:19:21.477523 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:19:21.483740 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:19:21.558444 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:19:21.606485 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Mar 7 01:19:21.621461 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:19:21.621577 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:19:21.621603 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:19:21.655435 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:19:21.661938 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:19:21.970511 ignition[962]: INFO : Ignition 2.19.0
Mar 7 01:19:21.970511 ignition[962]: INFO : Stage: files
Mar 7 01:19:21.989585 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:19:21.989585 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:19:22.067628 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:19:22.067628 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:19:22.067628 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:19:22.153729 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:19:22.153729 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:19:22.153729 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:19:22.153729 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:19:22.153729 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:19:22.085147 unknown[962]: wrote ssh authorized keys file for user: core
Mar 7 01:19:22.416031 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:19:23.640436 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:19:23.640436 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:19:23.674878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 7 01:19:24.469768 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:19:34.387691 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:19:34.387691 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:19:34.459877 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:19:34.459877 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:19:34.459877 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:19:34.459877 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 01:19:34.459877 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 01:19:34.459877 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 01:19:34.459877 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 01:19:34.459877 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 7 01:19:34.707128 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 01:19:34.768513 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 01:19:34.796157 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 7 01:19:34.796157 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:19:34.796157 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:19:34.796157 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:19:34.796157 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:19:34.796157 ignition[962]: INFO : files: files passed
Mar 7 01:19:34.796157 ignition[962]: INFO : Ignition finished successfully
Mar 7 01:19:34.835804 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:19:34.966005 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:19:35.038295 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:19:35.071852 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:19:35.072224 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:19:35.145312 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory Mar 7 01:19:35.171319 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:19:35.171319 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:19:35.200923 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:19:35.205626 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:19:35.240703 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:19:35.283821 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:19:35.420903 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:19:35.421483 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:19:35.471496 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:19:35.487215 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:19:35.505772 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:19:35.608673 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:19:35.742844 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:19:35.822703 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:19:35.913978 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:19:35.968759 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:19:35.985695 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:19:35.994031 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:19:35.998633 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:19:36.024633 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:19:36.058824 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:19:36.080700 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:19:36.103643 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:19:36.120785 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:19:36.129999 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:19:36.184686 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:19:36.217696 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:19:36.252377 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:19:36.275882 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:19:36.294152 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:19:36.294756 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:19:36.340785 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:19:36.351347 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 7 01:19:36.386685 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:19:36.387770 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:19:36.420267 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:19:36.420811 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:19:36.463219 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:19:36.463674 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:19:36.476279 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:19:36.503322 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:19:36.508633 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:19:36.529820 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:19:36.556913 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:19:36.565778 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:19:36.565931 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:19:36.582899 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:19:36.583131 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:19:36.598851 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:19:36.599294 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:19:36.613176 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:19:36.613452 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:19:36.679916 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:19:36.703665 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:19:36.988468 ignition[1018]: INFO : Ignition 2.19.0 Mar 7 01:19:36.988468 ignition[1018]: INFO : Stage: umount Mar 7 01:19:36.988468 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:19:36.988468 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:19:36.988468 ignition[1018]: INFO : umount: umount passed Mar 7 01:19:36.988468 ignition[1018]: INFO : Ignition finished successfully Mar 7 01:19:36.759721 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:19:36.760042 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:19:36.839941 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:19:36.840308 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:19:36.923799 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:19:36.926770 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:19:36.926980 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 01:19:36.975217 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:19:36.975475 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:19:36.990369 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:19:36.992708 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:19:37.036699 systemd[1]: Stopped target network.target - Network. Mar 7 01:19:37.046890 systemd[1]: ignition-disks.service: Deactivated successfully. 
Mar 7 01:19:37.047033 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:19:37.047733 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:19:37.047824 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:19:37.049140 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:19:37.049274 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:19:37.176960 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:19:37.177311 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:19:37.209882 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:19:37.209994 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:19:37.220277 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:19:37.224977 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:19:37.460713 systemd-networkd[784]: eth0: DHCPv6 lease lost Mar 7 01:19:37.511795 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:19:37.514445 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:19:37.558565 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:19:37.558885 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:19:37.588825 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:19:37.588908 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:19:37.707638 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:19:37.731862 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:19:37.731988 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:19:37.759480 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:19:37.760255 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:19:37.794629 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:19:37.794723 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:19:37.854895 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:19:37.855025 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:19:37.872334 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:19:37.992145 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:19:37.996256 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:19:38.041771 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:19:38.047220 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:19:38.065237 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:19:38.065380 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:19:38.138906 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:19:38.139045 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:19:38.206434 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:19:38.209193 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:19:38.318024 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Mar 7 01:19:38.318552 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:19:38.368006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:19:38.387589 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:19:38.443790 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:19:38.458635 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:19:38.458762 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:19:38.476647 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:19:38.476750 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:19:38.496653 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:19:38.496759 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:19:38.506229 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:19:38.506318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:19:38.574496 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:19:38.575860 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:19:38.588831 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:19:38.669226 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:19:38.732692 systemd[1]: Switching root. Mar 7 01:19:38.839262 systemd-journald[194]: Journal stopped Mar 7 01:19:45.697805 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Mar 7 01:19:45.697934 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 01:19:45.697969 kernel: SELinux: policy capability open_perms=1 Mar 7 01:19:45.697991 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 01:19:45.698022 kernel: SELinux: policy capability always_check_network=0 Mar 7 01:19:45.698047 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 01:19:45.698178 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 01:19:45.698209 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 01:19:45.698228 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 01:19:45.698246 kernel: audit: type=1403 audit(1772846379.566:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 01:19:45.698267 systemd[1]: Successfully loaded SELinux policy in 164.416ms. Mar 7 01:19:45.698299 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 133.920ms. Mar 7 01:19:45.698322 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:19:45.698343 systemd[1]: Detected virtualization kvm. Mar 7 01:19:45.698362 systemd[1]: Detected architecture x86-64. Mar 7 01:19:45.698383 systemd[1]: Detected first boot. Mar 7 01:19:45.698410 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:19:45.704265 zram_generator::config[1063]: No configuration found. Mar 7 01:19:45.704298 systemd[1]: Populated /etc with preset unit settings. 
Mar 7 01:19:45.704316 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 7 01:19:45.704333 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 7 01:19:45.704350 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 7 01:19:45.704407 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 01:19:45.704465 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 01:19:45.704492 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 01:19:45.704509 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 01:19:45.704530 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 01:19:45.704549 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 01:19:45.704566 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 01:19:45.704586 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 01:19:45.704648 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:19:45.704671 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:19:45.704691 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 01:19:45.704723 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 01:19:45.704741 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 7 01:19:45.704761 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:19:45.704780 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 7 01:19:45.704798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:19:45.704818 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 7 01:19:45.704835 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 7 01:19:45.704854 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 7 01:19:45.704879 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 01:19:45.704898 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:19:45.704916 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:19:45.704934 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:19:45.704951 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:19:45.704969 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 01:19:45.704986 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 01:19:45.705005 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:19:45.705029 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:19:45.705048 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:19:45.705152 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 01:19:45.705176 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 01:19:45.705195 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Mar 7 01:19:45.705218 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 01:19:45.705235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:19:45.705253 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 01:19:45.705271 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 01:19:45.705296 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 01:19:45.705317 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 01:19:45.705335 systemd[1]: Reached target machines.target - Containers. Mar 7 01:19:45.705400 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 01:19:45.705469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:19:45.705492 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:19:45.705511 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 01:19:45.705528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:19:45.705545 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:19:45.705570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:19:45.705589 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 01:19:45.705607 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:19:45.705627 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 01:19:45.705645 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 7 01:19:45.705663 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 7 01:19:45.705681 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 7 01:19:45.705706 systemd[1]: Stopped systemd-fsck-usr.service. Mar 7 01:19:45.705731 kernel: fuse: init (API version 7.39) Mar 7 01:19:45.705752 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:19:45.705769 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:19:45.705784 kernel: loop: module loaded Mar 7 01:19:45.705798 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 01:19:45.705815 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 01:19:45.705882 systemd-journald[1147]: Collecting audit messages is disabled. Mar 7 01:19:45.705925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:19:45.705949 systemd-journald[1147]: Journal started Mar 7 01:19:45.705980 systemd-journald[1147]: Runtime Journal (/run/log/journal/a8ac1047cf684690b6c55a10cae4cbd4) is 6.0M, max 48.4M, 42.3M free. Mar 7 01:19:42.791640 systemd[1]: Queued start job for default target multi-user.target. Mar 7 01:19:42.853898 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 7 01:19:42.854996 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 7 01:19:42.855647 systemd[1]: systemd-journald.service: Consumed 2.271s CPU time. 
Mar 7 01:19:45.732720 systemd[1]: verity-setup.service: Deactivated successfully. Mar 7 01:19:45.732807 systemd[1]: Stopped verity-setup.service. Mar 7 01:19:45.754195 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:19:45.769247 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:19:45.771646 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 01:19:45.780672 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 01:19:45.789779 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 01:19:45.797504 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 01:19:45.803319 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 01:19:45.810999 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 01:19:45.848935 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 01:19:45.898619 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:19:45.929789 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 01:19:45.930471 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 01:19:45.956730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:19:45.958937 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:19:45.998973 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:19:45.999543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:19:46.019866 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 01:19:46.020508 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 01:19:46.064667 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:19:46.065492 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:19:46.073768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:19:46.083002 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 01:19:46.095217 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 01:19:46.117653 kernel: ACPI: bus type drm_connector registered Mar 7 01:19:46.120515 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:19:46.121016 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:19:46.178682 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 01:19:46.244036 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 01:19:46.309381 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 01:19:46.343236 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 01:19:46.343925 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:19:46.371706 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 01:19:46.412746 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 01:19:46.466869 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Mar 7 01:19:46.484699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:19:46.532000 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 7 01:19:46.687503 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 01:19:46.716757 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:19:46.728969 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 01:19:46.750747 systemd-journald[1147]: Time spent on flushing to /var/log/journal/a8ac1047cf684690b6c55a10cae4cbd4 is 1.237890s for 943 entries. Mar 7 01:19:46.750747 systemd-journald[1147]: System Journal (/var/log/journal/a8ac1047cf684690b6c55a10cae4cbd4) is 8.0M, max 195.6M, 187.6M free. Mar 7 01:19:48.499940 systemd-journald[1147]: Received client request to flush runtime journal. Mar 7 01:19:48.500042 kernel: loop0: detected capacity change from 0 to 140768 Mar 7 01:19:48.500174 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 01:19:46.773860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:19:46.802396 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:19:46.907152 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 01:19:47.000881 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:19:47.028384 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:19:47.497610 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 01:19:47.536278 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 01:19:47.641909 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 01:19:47.723334 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 01:19:48.032955 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 01:19:48.090138 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 01:19:48.151902 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 01:19:48.526335 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 01:19:48.615933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:19:48.652275 kernel: loop1: detected capacity change from 0 to 142488 Mar 7 01:19:48.672206 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 01:19:48.676313 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 7 01:19:48.685186 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 01:19:48.715515 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Mar 7 01:19:48.715658 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Mar 7 01:19:48.850616 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Mar 7 01:19:48.912133 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 01:19:49.789148 kernel: loop2: detected capacity change from 0 to 219192 Mar 7 01:19:50.224752 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 01:19:50.297988 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:19:50.328857 kernel: loop3: detected capacity change from 0 to 140768 Mar 7 01:19:50.496847 kernel: loop4: detected capacity change from 0 to 142488 Mar 7 01:19:50.578267 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Mar 7 01:19:50.578297 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Mar 7 01:19:50.609495 kernel: loop5: detected capacity change from 0 to 219192 Mar 7 01:19:50.660976 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:19:50.696569 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 7 01:19:50.704983 (sd-merge)[1203]: Merged extensions into '/usr'. Mar 7 01:19:50.758633 systemd[1]: Reloading requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 01:19:50.758657 systemd[1]: Reloading... Mar 7 01:19:52.878776 zram_generator::config[1237]: No configuration found. Mar 7 01:19:53.759786 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 01:19:53.847742 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:19:54.396119 systemd[1]: Reloading finished in 3636 ms. Mar 7 01:19:54.498867 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 01:19:54.520398 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 01:19:54.752884 systemd[1]: Starting ensure-sysext.service... Mar 7 01:19:54.787002 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:19:54.826714 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... Mar 7 01:19:54.826771 systemd[1]: Reloading... Mar 7 01:19:55.000258 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 01:19:55.003290 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 01:19:55.022784 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 01:19:55.023442 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Mar 7 01:19:55.038823 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Mar 7 01:19:55.050855 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:19:55.051921 systemd-tmpfiles[1269]: Skipping /boot Mar 7 01:19:55.106817 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:19:55.106842 systemd-tmpfiles[1269]: Skipping /boot Mar 7 01:19:55.119176 zram_generator::config[1293]: No configuration found. Mar 7 01:19:55.893965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 7 01:19:56.038869 systemd[1]: Reloading finished in 1211 ms. Mar 7 01:19:56.095433 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 01:19:56.146679 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:19:56.202278 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:19:56.230744 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 01:19:56.255941 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 01:19:56.282747 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:19:56.327233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:19:56.384768 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 01:19:56.439506 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 01:19:56.457868 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:19:56.462774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:19:56.475911 augenrules[1356]: No rules Mar 7 01:19:56.490714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:19:56.518644 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:19:56.534919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:19:56.552683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:19:56.553031 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:19:56.554938 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:19:56.565528 systemd-udevd[1346]: Using default interface naming scheme 'v255'. Mar 7 01:19:56.568424 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 01:19:56.576311 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:19:56.576610 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:19:56.588646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:19:56.588951 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:19:56.599592 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:19:56.600016 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:19:56.642780 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:19:56.643790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:19:56.674009 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 01:19:56.688646 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:19:56.706048 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Mar 7 01:19:56.729648 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 01:19:56.765011 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 01:19:56.792645 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 01:19:56.914889 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:19:56.915315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:19:56.941053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:19:56.991380 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:19:57.002280 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:19:57.045907 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:19:57.053309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:19:57.204884 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:19:57.217833 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 01:19:57.217931 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:19:57.222502 systemd[1]: Finished ensure-sysext.service. Mar 7 01:19:57.246929 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:19:57.247269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:19:57.264208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:19:57.264614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:19:57.319413 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:19:57.360307 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 7 01:19:57.415147 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:19:57.415805 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:19:57.729003 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 7 01:19:57.893595 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:19:57.894219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:19:58.054673 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:19:58.121784 systemd-resolved[1344]: Positive Trust Anchors: Mar 7 01:19:58.122365 systemd-resolved[1344]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:19:58.122714 systemd-resolved[1344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:19:58.401938 systemd-resolved[1344]: Defaulting to hostname 'linux'. Mar 7 01:19:58.450961 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:19:58.471892 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:19:58.508189 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1381) Mar 7 01:19:58.543153 systemd-networkd[1401]: lo: Link UP Mar 7 01:19:58.543199 systemd-networkd[1401]: lo: Gained carrier Mar 7 01:19:58.560382 systemd-networkd[1401]: Enumeration completed Mar 7 01:19:58.569871 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:19:58.592984 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:19:58.592998 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:19:58.600430 systemd[1]: Reached target network.target - Network. Mar 7 01:19:58.643362 systemd-networkd[1401]: eth0: Link UP Mar 7 01:19:58.660818 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:19:58.660863 systemd-networkd[1401]: eth0: Gained carrier Mar 7 01:19:58.660898 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:19:58.785619 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:19:58.818043 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 7 01:19:58.993378 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 7 01:19:58.994640 systemd-timesyncd[1410]: Initial clock synchronization to Sat 2026-03-07 01:19:59.137511 UTC. Mar 7 01:19:59.021002 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 01:19:59.100765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 01:19:59.219877 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 01:19:59.276656 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 7 01:19:59.304477 kernel: ACPI: button: Power Button [PWRF] Mar 7 01:19:59.482337 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 7 01:19:59.605560 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 7 01:19:59.606148 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 7 01:19:59.606629 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 7 01:19:59.694639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 7 01:19:59.711712 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:19:59.833158 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 01:20:00.085757 systemd-networkd[1401]: eth0: Gained IPv6LL Mar 7 01:20:00.138424 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:20:00.200967 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:20:01.570810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:20:02.469352 kernel: kvm_amd: TSC scaling supported Mar 7 01:20:02.469545 kernel: kvm_amd: Nested Virtualization enabled Mar 7 01:20:02.476011 kernel: kvm_amd: Nested Paging enabled Mar 7 01:20:02.477268 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 7 01:20:02.483677 kernel: kvm_amd: PMU virtualization is disabled Mar 7 01:20:03.146635 kernel: EDAC MC: Ver: 3.0.0 Mar 7 01:20:03.438804 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 01:20:03.480391 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 01:20:03.609357 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:20:03.846011 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 01:20:03.864604 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:20:03.881910 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:20:03.908213 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 01:20:03.927860 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 01:20:03.953557 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 01:20:03.971356 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 01:20:03.988823 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 01:20:04.018772 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 01:20:04.018827 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:20:04.028438 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:20:04.070247 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 01:20:04.098063 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 01:20:04.214550 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 01:20:04.277509 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 01:20:04.314589 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:20:04.314871 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 01:20:04.339277 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:20:04.355547 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:20:04.370147 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:20:04.370227 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Mar 7 01:20:04.400554 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 01:20:04.441747 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 7 01:20:04.478449 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 01:20:04.517348 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 01:20:04.556544 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 01:20:04.575189 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 01:20:04.577830 jq[1441]: false Mar 7 01:20:04.604587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:20:04.660246 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 01:20:04.686589 dbus-daemon[1440]: [system] SELinux support is enabled Mar 7 01:20:04.733997 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:20:04.735685 extend-filesystems[1442]: Found loop3 Mar 7 01:20:04.735685 extend-filesystems[1442]: Found loop4 Mar 7 01:20:04.735685 extend-filesystems[1442]: Found loop5 Mar 7 01:20:04.735685 extend-filesystems[1442]: Found sr0 Mar 7 01:20:04.735685 extend-filesystems[1442]: Found vda Mar 7 01:20:04.735685 extend-filesystems[1442]: Found vda1 Mar 7 01:20:04.735685 extend-filesystems[1442]: Found vda2 Mar 7 01:20:04.735685 extend-filesystems[1442]: Found vda3 Mar 7 01:20:04.735685 extend-filesystems[1442]: Found usr Mar 7 01:20:04.735685 extend-filesystems[1442]: Found vda4 Mar 7 01:20:04.735685 extend-filesystems[1442]: Found vda6 Mar 7 01:20:04.930295 extend-filesystems[1442]: Found vda7 Mar 7 01:20:04.930295 extend-filesystems[1442]: Found vda9 Mar 7 01:20:04.930295 extend-filesystems[1442]: Checking size of /dev/vda9 Mar 7 01:20:04.909918 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 01:20:05.040293 extend-filesystems[1442]: Resized partition /dev/vda9 Mar 7 01:20:05.059490 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024) Mar 7 01:20:05.142763 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1381) Mar 7 01:20:05.142908 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 7 01:20:05.103318 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 01:20:05.168844 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 01:20:05.331160 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 01:20:05.338453 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 7 01:20:05.348673 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 01:20:05.369351 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 01:20:05.394377 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 01:20:05.419558 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 7 01:20:05.443055 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 7 01:20:05.443055 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 7 01:20:05.443055 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 7 01:20:05.505029 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Mar 7 01:20:05.476711 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 01:20:05.557035 jq[1472]: true Mar 7 01:20:05.642677 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 01:20:05.653485 systemd-logind[1464]: Watching system buttons on /dev/input/event1 (Power Button) Mar 7 01:20:05.655441 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 01:20:05.661554 update_engine[1470]: I20260307 01:20:05.661162 1470 main.cc:92] Flatcar Update Engine starting Mar 7 01:20:05.673442 update_engine[1470]: I20260307 01:20:05.668301 1470 update_check_scheduler.cc:74] Next update check in 9m35s Mar 7 01:20:05.672892 systemd-logind[1464]: New seat seat0. Mar 7 01:20:05.686841 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 01:20:05.781051 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 01:20:05.784245 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 01:20:05.784924 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:20:05.788530 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 01:20:05.831742 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 01:20:05.832280 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 01:20:05.901583 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:20:05.955442 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 01:20:05.958962 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 01:20:06.032916 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 01:20:06.096870 jq[1477]: true Mar 7 01:20:06.142045 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 7 01:20:06.142749 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 7 01:20:06.272331 tar[1476]: linux-amd64/LICENSE Mar 7 01:20:06.272331 tar[1476]: linux-amd64/helm Mar 7 01:20:06.273439 dbus-daemon[1440]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 01:20:06.303446 systemd[1]: Started update-engine.service - Update Engine. Mar 7 01:20:06.315392 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:20:06.333928 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 01:20:06.334338 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 01:20:06.341384 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Mar 7 01:20:06.341636 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 01:20:06.392484 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 01:20:06.918303 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:20:07.097321 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:20:07.369254 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:20:07.393178 bash[1511]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:20:07.400053 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 01:20:07.417166 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 01:20:07.453716 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 7 01:20:07.947687 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:20:07.949233 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:20:08.379234 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:20:09.567457 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:20:09.819956 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:20:09.914478 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:20:09.938509 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:20:10.029512 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:20:10.267010 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:59052.service - OpenSSH per-connection server daemon (10.0.0.1:59052). Mar 7 01:20:10.821541 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 59052 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:20:10.941796 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:11.068251 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:20:11.257416 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:20:11.314475 systemd-logind[1464]: New session 1 of user core. Mar 7 01:20:11.692447 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:20:11.830549 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:20:12.918802 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:20:13.401959 containerd[1478]: time="2026-03-07T01:20:13.387531788Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 01:20:13.643892 containerd[1478]: time="2026-03-07T01:20:13.630615092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:20:13.665441 containerd[1478]: time="2026-03-07T01:20:13.664276764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:20:13.665441 containerd[1478]: time="2026-03-07T01:20:13.664429925Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Mar 7 01:20:13.665441 containerd[1478]: time="2026-03-07T01:20:13.664586149Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 01:20:13.672224 containerd[1478]: time="2026-03-07T01:20:13.666520618Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 01:20:13.672224 containerd[1478]: time="2026-03-07T01:20:13.666562214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 01:20:13.672224 containerd[1478]: time="2026-03-07T01:20:13.666863008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:20:13.672224 containerd[1478]: time="2026-03-07T01:20:13.667011406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:20:13.672224 containerd[1478]: time="2026-03-07T01:20:13.667559424Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:20:13.672224 containerd[1478]: time="2026-03-07T01:20:13.667586832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 01:20:13.672224 containerd[1478]: time="2026-03-07T01:20:13.667693866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:20:13.672224 containerd[1478]: time="2026-03-07T01:20:13.667714141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:20:13.672224 containerd[1478]: time="2026-03-07T01:20:13.668269051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:20:13.672224 containerd[1478]: time="2026-03-07T01:20:13.668760623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:20:13.672224 containerd[1478]: time="2026-03-07T01:20:13.669415423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:20:13.677584 containerd[1478]: time="2026-03-07T01:20:13.669441084Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 01:20:13.677584 containerd[1478]: time="2026-03-07T01:20:13.669673969Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 01:20:13.677584 containerd[1478]: time="2026-03-07T01:20:13.669871498Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:20:14.151708 containerd[1478]: time="2026-03-07T01:20:14.149852554Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:20:14.177003 containerd[1478]: time="2026-03-07T01:20:14.157879668Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 7 01:20:14.177003 containerd[1478]: time="2026-03-07T01:20:14.158047931Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:20:14.181606 containerd[1478]: time="2026-03-07T01:20:14.158208820Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:20:14.181606 containerd[1478]: time="2026-03-07T01:20:14.178770598Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 01:20:14.184624 containerd[1478]: time="2026-03-07T01:20:14.183384526Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 01:20:14.232559 containerd[1478]: time="2026-03-07T01:20:14.219849919Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 01:20:14.232559 containerd[1478]: time="2026-03-07T01:20:14.220943395Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 01:20:14.232559 containerd[1478]: time="2026-03-07T01:20:14.221045198Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 01:20:14.232559 containerd[1478]: time="2026-03-07T01:20:14.221156834Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 01:20:14.232559 containerd[1478]: time="2026-03-07T01:20:14.221230885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:20:14.232559 containerd[1478]: time="2026-03-07T01:20:14.221348044Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 01:20:14.232559 containerd[1478]: time="2026-03-07T01:20:14.221437022Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:20:14.232559 containerd[1478]: time="2026-03-07T01:20:14.221470518Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:20:14.232559 containerd[1478]: time="2026-03-07T01:20:14.221502547Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:20:14.232559 containerd[1478]: time="2026-03-07T01:20:14.221530187Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.627864300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.628161584Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.628394989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.628464612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.628489661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.628556321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.629647116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.629687702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.629733270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.629777061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.629807262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.629845388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.629876091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.630443 containerd[1478]: time="2026-03-07T01:20:14.629905428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.636976 containerd[1478]: time="2026-03-07T01:20:14.636928688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.638271 containerd[1478]: time="2026-03-07T01:20:14.638231546Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:20:14.642802 containerd[1478]: time="2026-03-07T01:20:14.642754328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.655873 containerd[1478]: time="2026-03-07T01:20:14.642999383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.655873 containerd[1478]: time="2026-03-07T01:20:14.643036194Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:20:14.655873 containerd[1478]: time="2026-03-07T01:20:14.643331709Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:20:14.655873 containerd[1478]: time="2026-03-07T01:20:14.643373943Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:20:14.655873 containerd[1478]: time="2026-03-07T01:20:14.643508639Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:20:14.655873 containerd[1478]: time="2026-03-07T01:20:14.643541201Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:20:14.655873 containerd[1478]: time="2026-03-07T01:20:14.643558154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 7 01:20:14.655873 containerd[1478]: time="2026-03-07T01:20:14.643583283Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:20:14.655873 containerd[1478]: time="2026-03-07T01:20:14.643721906Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:20:14.655873 containerd[1478]: time="2026-03-07T01:20:14.643746071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 01:20:14.683649 containerd[1478]: time="2026-03-07T01:20:14.683348267Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:20:14.706922 containerd[1478]: time="2026-03-07T01:20:14.697138562Z" level=info msg="Connect containerd service" Mar 7 01:20:14.706922 containerd[1478]: time="2026-03-07T01:20:14.697327454Z" level=info msg="using legacy CRI server" Mar 7 01:20:14.706922 containerd[1478]: time="2026-03-07T01:20:14.697348786Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:20:14.706922 containerd[1478]: 
time="2026-03-07T01:20:14.702934955Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:20:14.740046 containerd[1478]: time="2026-03-07T01:20:14.724764997Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:20:14.740046 containerd[1478]: time="2026-03-07T01:20:14.735972686Z" level=info msg="Start subscribing containerd event" Mar 7 01:20:14.740046 containerd[1478]: time="2026-03-07T01:20:14.736310786Z" level=info msg="Start recovering state" Mar 7 01:20:14.740046 containerd[1478]: time="2026-03-07T01:20:14.736587128Z" level=info msg="Start event monitor" Mar 7 01:20:14.740046 containerd[1478]: time="2026-03-07T01:20:14.736676557Z" level=info msg="Start snapshots syncer" Mar 7 01:20:14.740046 containerd[1478]: time="2026-03-07T01:20:14.736781142Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:20:14.740046 containerd[1478]: time="2026-03-07T01:20:14.736794791Z" level=info msg="Start streaming server" Mar 7 01:20:14.901761 containerd[1478]: time="2026-03-07T01:20:14.899964707Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:20:14.901761 containerd[1478]: time="2026-03-07T01:20:14.900730780Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:20:14.907810 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:20:14.917274 containerd[1478]: time="2026-03-07T01:20:14.907904794Z" level=info msg="containerd successfully booted in 1.555312s" Mar 7 01:20:15.653664 systemd[1545]: Queued start job for default target default.target. Mar 7 01:20:15.723730 systemd[1545]: Created slice app.slice - User Application Slice. Mar 7 01:20:15.723780 systemd[1545]: Reached target paths.target - Paths. Mar 7 01:20:15.723806 systemd[1545]: Reached target timers.target - Timers. Mar 7 01:20:15.780050 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:20:16.648765 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:20:16.651291 systemd[1545]: Reached target sockets.target - Sockets. Mar 7 01:20:16.651328 systemd[1545]: Reached target basic.target - Basic System. Mar 7 01:20:16.651585 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:20:16.658228 systemd[1545]: Reached target default.target - Main User Target. Mar 7 01:20:16.658313 systemd[1545]: Startup finished in 3.316s. Mar 7 01:20:16.795742 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:20:16.987046 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:51044.service - OpenSSH per-connection server daemon (10.0.0.1:51044). Mar 7 01:20:18.026690 tar[1476]: linux-amd64/README.md Mar 7 01:20:18.451821 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 51044 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:20:18.536466 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:19.127258 systemd-logind[1464]: New session 2 of user core. Mar 7 01:20:19.148201 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:20:19.165242 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 7 01:20:19.526848 sshd[1564]: pam_unix(sshd:session): session closed for user core Mar 7 01:20:19.651019 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:51054.service - OpenSSH per-connection server daemon (10.0.0.1:51054). Mar 7 01:20:19.652262 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:51044.service: Deactivated successfully. Mar 7 01:20:19.675906 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:20:19.687458 systemd-logind[1464]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:20:19.728740 systemd-logind[1464]: Removed session 2. Mar 7 01:20:19.812553 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 51054 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:20:19.828702 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:19.964242 systemd-logind[1464]: New session 3 of user core. Mar 7 01:20:20.092001 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:20:20.415894 sshd[1572]: pam_unix(sshd:session): session closed for user core Mar 7 01:20:20.428301 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:51054.service: Deactivated successfully. Mar 7 01:20:20.436891 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:20:20.442240 systemd-logind[1464]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:20:20.444894 systemd-logind[1464]: Removed session 3. Mar 7 01:20:25.694413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:20:25.704715 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:20:25.708893 systemd[1]: Startup finished in 9.269s (kernel) + 45.837s (initrd) + 46.301s (userspace) = 1min 41.407s. Mar 7 01:20:25.718904 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:20:30.639535 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:55588.service - OpenSSH per-connection server daemon (10.0.0.1:55588). Mar 7 01:20:31.278270 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 55588 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:20:31.300173 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:31.340695 systemd-logind[1464]: New session 4 of user core. Mar 7 01:20:31.358121 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:20:32.266031 sshd[1593]: pam_unix(sshd:session): session closed for user core Mar 7 01:20:32.343749 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:55588.service: Deactivated successfully. Mar 7 01:20:32.358460 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:20:32.375250 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:20:32.466422 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:55604.service - OpenSSH per-connection server daemon (10.0.0.1:55604). Mar 7 01:20:32.473282 systemd-logind[1464]: Removed session 4. Mar 7 01:20:32.749679 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 55604 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:20:32.797349 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:32.852865 systemd-logind[1464]: New session 5 of user core. Mar 7 01:20:32.873600 systemd[1]: Started session-5.scope - Session 5 of User core. 
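[Editor's note: the "Startup finished" line above splits boot time into kernel, initrd, and userspace phases; the quoted total is just their sum, as this Go snippet confirms:]

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Phases reported by the "Startup finished" line above.
        kernel := 9269 * time.Millisecond
        initrd := 45837 * time.Millisecond
        userspace := 46301 * time.Millisecond
        fmt.Println(kernel + initrd + userspace) // prints 1m41.407s
    }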
Mar 7 01:20:33.199903 sshd[1600]: pam_unix(sshd:session): session closed for user core Mar 7 01:20:33.239366 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:55604.service: Deactivated successfully. Mar 7 01:20:33.255443 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:20:33.260502 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:20:33.310452 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:55616.service - OpenSSH per-connection server daemon (10.0.0.1:55616). Mar 7 01:20:33.319317 systemd-logind[1464]: Removed session 5. Mar 7 01:20:33.610310 kubelet[1584]: E0307 01:20:33.609118 1584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:20:33.625033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:20:33.630756 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:20:33.634295 systemd[1]: kubelet.service: Consumed 10.490s CPU time. Mar 7 01:20:33.681556 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 55616 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:20:33.686370 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:33.725315 systemd-logind[1464]: New session 6 of user core. Mar 7 01:20:33.746167 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:20:34.015349 sshd[1608]: pam_unix(sshd:session): session closed for user core Mar 7 01:20:34.094528 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:55630.service - OpenSSH per-connection server daemon (10.0.0.1:55630). Mar 7 01:20:34.095789 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:55616.service: Deactivated successfully. Mar 7 01:20:34.105691 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:20:34.114855 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:20:34.127650 systemd-logind[1464]: Removed session 6. Mar 7 01:20:34.201511 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 55630 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:20:34.207944 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:34.233042 systemd-logind[1464]: New session 7 of user core. Mar 7 01:20:34.243199 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:20:34.494894 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:20:34.497284 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:20:39.992565 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:20:40.006022 (dockerd)[1637]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:20:43.853712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:20:44.183375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:20:48.117804 dockerd[1637]: time="2026-03-07T01:20:48.112353412Z" level=info msg="Starting up" Mar 7 01:20:51.137483 systemd[1]: var-lib-docker-metacopy\x2dcheck1703850969-merged.mount: Deactivated successfully. 
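[Editor's note: the kubelet failure above, and the restart loop that follows for the rest of this log, all trace to a missing /var/lib/kubelet/config.yaml; that file is normally written by kubeadm init/join, which has not run yet here. A Go sketch that drops a minimal KubeletConfiguration stub in place, for illustration only — in a real cluster kubeadm should generate it:]

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    // Minimal stub; kubeadm normally writes a fully populated version.
    const kubeletConfig = "apiVersion: kubelet.config.k8s.io/v1beta1\n" +
        "kind: KubeletConfiguration\n"

    func main() {
        const path = "/var/lib/kubelet/config.yaml"
        if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile(path, []byte(kubeletConfig), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Printf("wrote %s", path)
    }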
Mar 7 01:20:51.394215 update_engine[1470]: I20260307 01:20:51.376243 1470 update_attempter.cc:509] Updating boot flags... Mar 7 01:20:51.708417 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:20:51.709262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:20:51.892732 dockerd[1637]: time="2026-03-07T01:20:51.892296364Z" level=info msg="Loading containers: start." Mar 7 01:20:52.148146 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1683) Mar 7 01:20:52.601802 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1682) Mar 7 01:20:52.738585 kubelet[1674]: E0307 01:20:52.729412 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:20:52.790214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:20:52.794688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:20:52.795196 systemd[1]: kubelet.service: Consumed 3.618s CPU time. Mar 7 01:20:52.912579 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1682) Mar 7 01:20:54.057680 kernel: Initializing XFRM netlink socket Mar 7 01:20:54.814577 systemd-networkd[1401]: docker0: Link UP Mar 7 01:20:54.903607 dockerd[1637]: time="2026-03-07T01:20:54.899896830Z" level=info msg="Loading containers: done." Mar 7 01:20:55.055748 dockerd[1637]: time="2026-03-07T01:20:55.051977992Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:20:55.055748 dockerd[1637]: time="2026-03-07T01:20:55.054037304Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:20:55.059744 dockerd[1637]: time="2026-03-07T01:20:55.057917806Z" level=info msg="Daemon has completed initialization" Mar 7 01:20:55.436784 dockerd[1637]: time="2026-03-07T01:20:55.433618424Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:20:55.441451 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:21:02.896437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:21:03.049862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:21:05.763716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
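[Editor's note: the Docker daemon above completes initialization and listens on /run/docker.sock. A minimal liveness probe against that socket using the Go Docker client — assumes the github.com/docker/docker module is available:]

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        // Talk to the socket the daemon reports listening on above.
        cli, err := client.NewClientWithOpts(
            client.WithHost("unix:///run/docker.sock"),
            client.WithAPIVersionNegotiation(),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ping, err := cli.Ping(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("docker API version:", ping.APIVersion)
    }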
Mar 7 01:21:05.792956 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:21:06.407198 kubelet[1821]: E0307 01:21:06.406667 1821 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:21:06.424455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:21:06.427314 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:21:06.432533 systemd[1]: kubelet.service: Consumed 1.908s CPU time. Mar 7 01:21:11.631851 containerd[1478]: time="2026-03-07T01:21:11.629829880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 7 01:21:13.177577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027386038.mount: Deactivated successfully. Mar 7 01:21:16.702996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 01:21:17.186135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:21:19.904026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:21:19.911872 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:21:20.870380 kubelet[1871]: E0307 01:21:20.868790 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:21:20.940275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:21:20.940558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:21:20.943871 systemd[1]: kubelet.service: Consumed 1.930s CPU time. Mar 7 01:21:31.402690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 01:21:31.487357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
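[Editor's note: the PullImage line above is the CRI plugin beginning a pull of kube-apiserver:v1.34.5 through containerd. The same pull can be issued directly with the containerd Go client; a sketch, assuming the k8s.io namespace the CRI plugin uses:]

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Pull and unpack into the namespace the CRI plugin uses.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx,
            "registry.k8s.io/kube-apiserver:v1.34.5", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }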
Mar 7 01:21:31.593016 containerd[1478]: time="2026-03-07T01:21:31.588818730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:32.350287 containerd[1478]: time="2026-03-07T01:21:31.602684876Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 7 01:21:32.357460 containerd[1478]: time="2026-03-07T01:21:31.913623537Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:32.447028 containerd[1478]: time="2026-03-07T01:21:32.446960894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:32.471316 containerd[1478]: time="2026-03-07T01:21:32.471194869Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 20.840879144s" Mar 7 01:21:32.481515 containerd[1478]: time="2026-03-07T01:21:32.478120231Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 7 01:21:32.504421 containerd[1478]: time="2026-03-07T01:21:32.504188346Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 7 01:21:34.895623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:21:34.921902 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:21:35.987589 kubelet[1917]: E0307 01:21:35.985600 1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:21:36.014686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:21:36.015698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:21:36.025437 systemd[1]: kubelet.service: Consumed 1.407s CPU time. Mar 7 01:21:46.419550 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 01:21:46.488639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:21:49.635569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
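[Editor's note: the "repo digest" in the pull result above is a sha256 over the image manifest bytes. The construction is ordinary content addressing, illustrated here on a stand-in payload, not the real manifest:]

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // Registry digests are sha256 over the manifest the registry serves;
        // this placeholder shows the same digest construction.
        manifest := []byte(`{"schemaVersion":2}`)
        fmt.Printf("sha256:%x\n", sha256.Sum256(manifest))
    }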
Mar 7 01:21:49.788777 (kubelet)[1938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:21:51.342489 kubelet[1938]: E0307 01:21:51.340966 1938 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:21:51.349414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:21:51.349724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:21:51.353784 systemd[1]: kubelet.service: Consumed 1.864s CPU time. Mar 7 01:21:51.750582 containerd[1478]: time="2026-03-07T01:21:51.744970637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:51.784733 containerd[1478]: time="2026-03-07T01:21:51.783546566Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 7 01:21:51.811512 containerd[1478]: time="2026-03-07T01:21:51.809390178Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:51.847168 containerd[1478]: time="2026-03-07T01:21:51.846257695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:51.851902 containerd[1478]: time="2026-03-07T01:21:51.849534181Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 19.345209364s" Mar 7 01:21:51.851902 containerd[1478]: time="2026-03-07T01:21:51.849738223Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 7 01:21:51.879822 containerd[1478]: time="2026-03-07T01:21:51.876154544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 7 01:22:00.343595 containerd[1478]: time="2026-03-07T01:22:00.337754758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:00.343595 containerd[1478]: time="2026-03-07T01:22:00.346627908Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 7 01:22:00.373584 containerd[1478]: time="2026-03-07T01:22:00.365538391Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:00.386454 containerd[1478]: time="2026-03-07T01:22:00.385797638Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:00.393293 containerd[1478]: time="2026-03-07T01:22:00.389154457Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 8.512844351s" Mar 7 01:22:00.393293 containerd[1478]: time="2026-03-07T01:22:00.392535019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 7 01:22:00.423350 containerd[1478]: time="2026-03-07T01:22:00.422305344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 7 01:22:01.601303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 7 01:22:01.619986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:22:04.790270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:22:04.913184 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:22:07.555474 kubelet[1964]: E0307 01:22:07.553282 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:22:07.584926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:22:07.587004 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:22:07.590542 systemd[1]: kubelet.service: Consumed 2.264s CPU time. Mar 7 01:22:10.534323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702029325.mount: Deactivated successfully. Mar 7 01:22:17.879732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 7 01:22:17.943812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:22:21.290380 containerd[1478]: time="2026-03-07T01:22:21.290176291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:21.303759 containerd[1478]: time="2026-03-07T01:22:21.293808400Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 7 01:22:21.304241 containerd[1478]: time="2026-03-07T01:22:21.304035091Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:21.452323 containerd[1478]: time="2026-03-07T01:22:21.449653584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:21.464030 containerd[1478]: time="2026-03-07T01:22:21.462328702Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 21.039946125s" Mar 7 01:22:21.464030 containerd[1478]: time="2026-03-07T01:22:21.462553781Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 7 01:22:21.522393 containerd[1478]: time="2026-03-07T01:22:21.517167663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 7 01:22:23.683172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:22:23.758212 (kubelet)[1985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:22:24.590471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264545226.mount: Deactivated successfully. Mar 7 01:22:24.610564 kubelet[1985]: E0307 01:22:24.609474 1985 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:22:24.689849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:22:24.691681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:22:24.702274 systemd[1]: kubelet.service: Consumed 2.218s CPU time. Mar 7 01:22:34.851598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 7 01:22:34.938832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:22:38.700640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
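[Editor's note: the kube-proxy pull above reports both bytes read and elapsed time, so a rough pull throughput falls out directly:]

    package main

    import "fmt"

    func main() {
        // From the kube-proxy lines above:
        // "bytes read=25861770" over the reported 21.039946125s.
        const bytes = 25861770
        const seconds = 21.039946125
        fmt.Printf("%.2f MiB/s\n", bytes/seconds/(1024*1024)) // ~1.17 MiB/s
    }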
Mar 7 01:22:38.708250 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:22:39.667969 kubelet[2053]: E0307 01:22:39.667716 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:22:39.691473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:22:39.691932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:22:39.707393 systemd[1]: kubelet.service: Consumed 1.590s CPU time. Mar 7 01:22:45.931609 containerd[1478]: time="2026-03-07T01:22:45.921664689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:45.940359 containerd[1478]: time="2026-03-07T01:22:45.940153572Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 7 01:22:45.955436 containerd[1478]: time="2026-03-07T01:22:45.953031972Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:45.990473 containerd[1478]: time="2026-03-07T01:22:45.983384068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:45.996679 containerd[1478]: time="2026-03-07T01:22:45.995479394Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 24.478155902s" Mar 7 01:22:45.996679 containerd[1478]: time="2026-03-07T01:22:45.995749479Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 7 01:22:46.023802 containerd[1478]: time="2026-03-07T01:22:46.023739248Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 01:22:47.197642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2728272607.mount: Deactivated successfully. 
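[Editor's note: after the pulls above, the images live in containerd's k8s.io namespace; listing them with the containerd Go client shows each name and target digest — a sketch, same socket and namespace assumptions as earlier:]

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // List the images the pulls above left behind.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        imgs, err := client.ImageService().List(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, img := range imgs {
            fmt.Println(img.Name, img.Target.Digest)
        }
    }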
Mar 7 01:22:47.239737 containerd[1478]: time="2026-03-07T01:22:47.237999483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:47.245182 containerd[1478]: time="2026-03-07T01:22:47.244837523Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 7 01:22:47.257469 containerd[1478]: time="2026-03-07T01:22:47.255964250Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:47.277698 containerd[1478]: time="2026-03-07T01:22:47.277537758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:47.283752 containerd[1478]: time="2026-03-07T01:22:47.283589595Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.259559072s" Mar 7 01:22:47.283932 containerd[1478]: time="2026-03-07T01:22:47.283802159Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 01:22:47.295044 containerd[1478]: time="2026-03-07T01:22:47.294424946Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 7 01:22:48.512653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32661856.mount: Deactivated successfully. Mar 7 01:22:49.932404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Mar 7 01:22:50.086913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:22:51.665149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:22:51.809831 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:22:52.750739 kubelet[2087]: E0307 01:22:52.749919 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:22:52.769133 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:22:52.769591 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:22:52.780404 systemd[1]: kubelet.service: Consumed 1.040s CPU time. 
Mar 7 01:23:02.770037 containerd[1478]: time="2026-03-07T01:23:02.767987556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:02.782954 containerd[1478]: time="2026-03-07T01:23:02.782799672Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 7 01:23:02.787148 containerd[1478]: time="2026-03-07T01:23:02.786716650Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:02.806390 containerd[1478]: time="2026-03-07T01:23:02.803838722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:02.814670 containerd[1478]: time="2026-03-07T01:23:02.811697573Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 15.517209677s" Mar 7 01:23:02.814670 containerd[1478]: time="2026-03-07T01:23:02.812246071Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 7 01:23:02.841387 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Mar 7 01:23:02.925284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:23:04.060677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:23:04.063229 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:23:04.882579 kubelet[2167]: E0307 01:23:04.881266 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:23:04.896583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:23:04.897862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:23:14.903399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Mar 7 01:23:14.948043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:23:14.963996 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 01:23:14.964269 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 01:23:14.966919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:23:15.407914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:23:15.592893 systemd[1]: Reloading requested from client PID 2197 ('systemctl') (unit session-7.scope)... Mar 7 01:23:15.592955 systemd[1]: Reloading... Mar 7 01:23:15.985824 zram_generator::config[2242]: No configuration found. 
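[Editor's note: "Reloading requested from client PID 2197 ('systemctl')" above is a daemon-reload arriving over D-Bus. The same request can be made programmatically; a sketch with the go-systemd bindings, assuming permission to talk to the system bus:]

    package main

    import (
        "context"
        "log"

        "github.com/coreos/go-systemd/v22/dbus"
    )

    func main() {
        ctx := context.Background()
        conn, err := dbus.NewWithContext(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Equivalent of `systemctl daemon-reload`, the operation the
        // "Reloading requested" / "Reloading finished" lines above record.
        if err := conn.ReloadContext(ctx); err != nil {
            log.Fatal(err)
        }
        log.Println("systemd configuration reloaded")
    }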
Mar 7 01:23:17.136341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:23:17.469524 systemd[1]: Reloading finished in 1875 ms. Mar 7 01:23:17.765700 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:23:17.766245 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:23:17.783279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:23:18.462487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:23:18.498604 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:23:18.987396 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:23:18.991754 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:23:18.991754 kubelet[2284]: I0307 01:23:18.989206 2284 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:23:19.587485 kubelet[2284]: I0307 01:23:19.587347 2284 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 01:23:19.587485 kubelet[2284]: I0307 01:23:19.587427 2284 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:23:19.587485 kubelet[2284]: I0307 01:23:19.587484 2284 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:23:19.587485 kubelet[2284]: I0307 01:23:19.587501 2284 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:23:19.593671 kubelet[2284]: I0307 01:23:19.593029 2284 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:23:19.636160 kubelet[2284]: I0307 01:23:19.635956 2284 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:23:19.681671 kubelet[2284]: E0307 01:23:19.681005 2284 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:23:19.731722 kubelet[2284]: E0307 01:23:19.728001 2284 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:23:19.731722 kubelet[2284]: I0307 01:23:19.728173 2284 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:23:19.787036 kubelet[2284]: I0307 01:23:19.786264 2284 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:23:19.794054 kubelet[2284]: I0307 01:23:19.790971 2284 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:23:19.794054 kubelet[2284]: I0307 01:23:19.791420 2284 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:23:19.795880 kubelet[2284]: I0307 01:23:19.794993 2284 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:23:19.799679 kubelet[2284]: I0307 01:23:19.796852 2284 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:23:19.799679 kubelet[2284]: I0307 01:23:19.797441 2284 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:23:19.811853 kubelet[2284]: I0307 01:23:19.808508 2284 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:23:19.811853 kubelet[2284]: I0307 01:23:19.808973 2284 kubelet.go:475] "Attempting to sync node with API server" Mar 7 01:23:19.811853 kubelet[2284]: I0307 01:23:19.809047 2284 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:23:19.811853 kubelet[2284]: I0307 01:23:19.809168 2284 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:23:19.811853 kubelet[2284]: I0307 01:23:19.809194 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:23:19.874837 kubelet[2284]: E0307 01:23:19.848672 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:23:19.874837 kubelet[2284]: E0307 01:23:19.843388 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:23:19.943882 kubelet[2284]: I0307 01:23:19.943801 2284 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:23:19.953601 kubelet[2284]: I0307 01:23:19.947644 2284 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:23:19.953601 kubelet[2284]: I0307 01:23:19.947707 2284 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:23:19.953601 kubelet[2284]: W0307 01:23:19.948152 2284 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 01:23:19.998320 kubelet[2284]: I0307 01:23:19.998238 2284 server.go:1262] "Started kubelet" Mar 7 01:23:20.002699 kubelet[2284]: I0307 01:23:20.001678 2284 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:23:20.002699 kubelet[2284]: I0307 01:23:20.001822 2284 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:23:20.002699 kubelet[2284]: I0307 01:23:20.002694 2284 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:23:20.002904 kubelet[2284]: I0307 01:23:20.002852 2284 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:23:20.005670 kubelet[2284]: I0307 01:23:20.005056 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:23:20.034158 kubelet[2284]: I0307 01:23:20.017805 2284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:23:20.034158 kubelet[2284]: I0307 01:23:20.027321 2284 server.go:310] "Adding debug handlers to kubelet server" Mar 7 01:23:20.034158 kubelet[2284]: I0307 01:23:20.028181 2284 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 01:23:20.034158 kubelet[2284]: E0307 01:23:20.028485 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:23:20.034158 kubelet[2284]: I0307 01:23:20.029907 2284 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:23:20.034158 kubelet[2284]: I0307 01:23:20.029976 2284 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:23:20.034158 kubelet[2284]: E0307 01:23:20.030650 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:23:20.041223 kubelet[2284]: E0307 01:23:20.030787 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" Mar 7 01:23:20.041223 kubelet[2284]: I0307 01:23:20.035986 2284 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:23:20.041223 kubelet[2284]: I0307 01:23:20.036198 
2284 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:23:20.054454 kubelet[2284]: E0307 01:23:20.053239 2284 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:23:20.054454 kubelet[2284]: I0307 01:23:20.053463 2284 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:23:20.074170 kubelet[2284]: E0307 01:23:20.059368 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6a96fa2691b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:23:19.998181809 +0000 UTC m=+1.438283161,LastTimestamp:2026-03-07 01:23:19.998181809 +0000 UTC m=+1.438283161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:23:20.239593 kubelet[2284]: E0307 01:23:20.232323 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:23:20.253698 kubelet[2284]: E0307 01:23:20.252749 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" Mar 7 01:23:20.340821 kubelet[2284]: E0307 01:23:20.337518 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:23:20.386260 kubelet[2284]: I0307 01:23:20.382637 2284 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:23:20.386260 kubelet[2284]: I0307 01:23:20.382665 2284 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:23:20.386260 kubelet[2284]: I0307 01:23:20.382695 2284 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:23:20.410176 kubelet[2284]: I0307 01:23:20.410132 2284 policy_none.go:49] "None policy: Start" Mar 7 01:23:20.410416 kubelet[2284]: I0307 01:23:20.410391 2284 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:23:20.410627 kubelet[2284]: I0307 01:23:20.410601 2284 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:23:20.418269 kubelet[2284]: I0307 01:23:20.415496 2284 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:23:20.446503 kubelet[2284]: E0307 01:23:20.445754 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:23:20.452830 kubelet[2284]: I0307 01:23:20.450503 2284 policy_none.go:47] "Start" Mar 7 01:23:20.460304 kubelet[2284]: I0307 01:23:20.452380 2284 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:23:20.460304 kubelet[2284]: I0307 01:23:20.453586 2284 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 01:23:20.460304 kubelet[2284]: I0307 01:23:20.453770 2284 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 01:23:20.494427 kubelet[2284]: E0307 01:23:20.453962 2284 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:23:20.518930 kubelet[2284]: E0307 01:23:20.495247 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:23:20.549673 kubelet[2284]: E0307 01:23:20.548257 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:23:20.623010 kubelet[2284]: E0307 01:23:20.620400 2284 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:23:20.665270 kubelet[2284]: E0307 01:23:20.662942 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:23:20.676959 kubelet[2284]: E0307 01:23:20.676847 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" Mar 7 01:23:20.708642 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 01:23:20.807460 kubelet[2284]: E0307 01:23:20.804986 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:23:20.868865 kubelet[2284]: E0307 01:23:20.868665 2284 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:23:20.928501 kubelet[2284]: E0307 01:23:20.925259 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:23:21.032468 kubelet[2284]: E0307 01:23:21.032050 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:23:21.045498 kubelet[2284]: E0307 01:23:21.039948 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:23:21.049492 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:23:21.065382 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 7 01:23:21.205688 kubelet[2284]: E0307 01:23:21.199742 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:23:21.209145 kubelet[2284]: E0307 01:23:21.208043 2284 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:23:21.222482 kubelet[2284]: I0307 01:23:21.221009 2284 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:23:21.222482 kubelet[2284]: I0307 01:23:21.221476 2284 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:23:21.222482 kubelet[2284]: I0307 01:23:21.223425 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:23:21.260194 kubelet[2284]: E0307 01:23:21.259591 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:23:21.272663 kubelet[2284]: E0307 01:23:21.272499 2284 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:23:21.272785 kubelet[2284]: E0307 01:23:21.272675 2284 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:23:21.508445 kubelet[2284]: E0307 01:23:21.506690 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s" Mar 7 01:23:21.508445 kubelet[2284]: E0307 01:23:21.507037 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:23:21.508445 kubelet[2284]: I0307 01:23:21.516596 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f3d44dec3cd523b4bcc0030330b52a9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f3d44dec3cd523b4bcc0030330b52a9\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:23:21.529471 kubelet[2284]: I0307 01:23:21.529428 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:23:21.697852 kubelet[2284]: E0307 01:23:21.626368 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Mar 7 01:23:21.697852 kubelet[2284]: I0307 01:23:21.696998 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f3d44dec3cd523b4bcc0030330b52a9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f3d44dec3cd523b4bcc0030330b52a9\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:23:21.697852 kubelet[2284]: I0307 01:23:21.698463 2284 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f3d44dec3cd523b4bcc0030330b52a9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7f3d44dec3cd523b4bcc0030330b52a9\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:23:21.705679 kubelet[2284]: E0307 01:23:21.705370 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:23:21.890433 kubelet[2284]: I0307 01:23:21.829567 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:23:21.890433 kubelet[2284]: I0307 01:23:21.829755 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:23:21.890433 kubelet[2284]: I0307 01:23:21.829931 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:23:21.890433 kubelet[2284]: I0307 01:23:21.829960 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:23:21.890433 kubelet[2284]: I0307 01:23:21.829986 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:23:21.912444 kubelet[2284]: E0307 01:23:21.909414 2284 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:23:21.913833 kubelet[2284]: I0307 01:23:21.913809 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:23:21.914474 kubelet[2284]: E0307 01:23:21.914442 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" 
node="localhost" Mar 7 01:23:21.925965 systemd[1]: Created slice kubepods-burstable-pod7f3d44dec3cd523b4bcc0030330b52a9.slice - libcontainer container kubepods-burstable-pod7f3d44dec3cd523b4bcc0030330b52a9.slice. Mar 7 01:23:21.931428 kubelet[2284]: I0307 01:23:21.931398 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:23:21.961411 kubelet[2284]: E0307 01:23:21.960723 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:22.009482 kubelet[2284]: E0307 01:23:22.006760 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:22.144838 containerd[1478]: time="2026-03-07T01:23:22.141001647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7f3d44dec3cd523b4bcc0030330b52a9,Namespace:kube-system,Attempt:0,}" Mar 7 01:23:22.169643 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 7 01:23:22.211920 kubelet[2284]: E0307 01:23:22.210914 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:22.229232 kubelet[2284]: E0307 01:23:22.229023 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:22.236660 containerd[1478]: time="2026-03-07T01:23:22.230975434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 7 01:23:22.247865 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. 
Mar 7 01:23:22.294714 kubelet[2284]: E0307 01:23:22.291355 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:22.316824 kubelet[2284]: E0307 01:23:22.316041 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:22.324039 containerd[1478]: time="2026-03-07T01:23:22.320811296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 7 01:23:22.327402 kubelet[2284]: I0307 01:23:22.327365 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:23:22.333260 kubelet[2284]: E0307 01:23:22.333205 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Mar 7 01:23:22.997725 kubelet[2284]: E0307 01:23:22.996906 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:23:23.071278 kubelet[2284]: E0307 01:23:23.070704 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:23:23.113685 kubelet[2284]: E0307 01:23:23.110821 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="3.2s" Mar 7 01:23:23.138969 kubelet[2284]: I0307 01:23:23.138849 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:23:23.139702 kubelet[2284]: E0307 01:23:23.139443 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Mar 7 01:23:23.797205 kubelet[2284]: E0307 01:23:23.796405 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:23:23.821711 kubelet[2284]: E0307 01:23:23.820693 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:23:24.316595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524120248.mount: Deactivated successfully. 
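
The `Failed to ensure lease exists, will retry` entries show the retry interval doubling across attempts: 200ms, 400ms, 800ms, 1.6s, and now 3.2s, with 6.4s and a 7s ceiling appearing further down. A sketch of that progression, assuming a plain doubling backoff with a cap; the actual kubelet implementation may differ in details such as jitter:

```python
def lease_backoff(base_ms: int = 200, cap_ms: int = 7000):
    """Yield retry intervals that double until reaching the cap."""
    interval = base_ms
    while True:
        yield min(interval, cap_ms)
        interval *= 2

gen = lease_backoff()
print([next(gen) for _ in range(8)])
# [200, 400, 800, 1600, 3200, 6400, 7000, 7000] -- the intervals logged
```
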
Mar 7 01:23:24.434969 containerd[1478]: time="2026-03-07T01:23:24.433988789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:23:24.454153 containerd[1478]: time="2026-03-07T01:23:24.453923326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 01:23:24.463684 containerd[1478]: time="2026-03-07T01:23:24.463314480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:23:24.467746 containerd[1478]: time="2026-03-07T01:23:24.467408634Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:23:24.472050 containerd[1478]: time="2026-03-07T01:23:24.471140751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:23:24.502190 containerd[1478]: time="2026-03-07T01:23:24.500243791Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:23:24.506152 containerd[1478]: time="2026-03-07T01:23:24.505935771Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:23:24.515848 containerd[1478]: time="2026-03-07T01:23:24.513166158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:23:24.545517 containerd[1478]: time="2026-03-07T01:23:24.537439543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.394949381s" Mar 7 01:23:24.546372 containerd[1478]: time="2026-03-07T01:23:24.546199079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.314986365s" Mar 7 01:23:24.553981 containerd[1478]: time="2026-03-07T01:23:24.551996134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.227018774s" Mar 7 01:23:25.652944 kubelet[2284]: I0307 01:23:25.651531 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:23:25.652944 kubelet[2284]: E0307 01:23:25.652167 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 
10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6a96fa2691b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:23:19.998181809 +0000 UTC m=+1.438283161,LastTimestamp:2026-03-07 01:23:19.998181809 +0000 UTC m=+1.438283161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:23:25.652944 kubelet[2284]: E0307 01:23:25.652897 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Mar 7 01:23:27.750556 kubelet[2284]: E0307 01:23:27.748905 2284 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:23:27.756326 kubelet[2284]: E0307 01:23:27.748912 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="6.4s" Mar 7 01:23:27.756326 kubelet[2284]: E0307 01:23:27.755388 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:23:28.904753 kubelet[2284]: E0307 01:23:28.904135 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:23:28.912616 kubelet[2284]: I0307 01:23:28.910951 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:23:28.914559 kubelet[2284]: E0307 01:23:28.913593 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Mar 7 01:23:29.213589 kubelet[2284]: E0307 01:23:29.206481 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:23:29.324338 kubelet[2284]: E0307 01:23:29.322248 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:23:30.919672 containerd[1478]: time="2026-03-07T01:23:30.916789971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:23:30.919672 containerd[1478]: time="2026-03-07T01:23:30.918307662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:23:30.919672 containerd[1478]: time="2026-03-07T01:23:30.918407958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:23:30.919672 containerd[1478]: time="2026-03-07T01:23:30.919028152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:23:32.690177 kubelet[2284]: E0307 01:23:32.666124 2284 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:23:32.763240 containerd[1478]: time="2026-03-07T01:23:32.759157434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:23:32.763240 containerd[1478]: time="2026-03-07T01:23:32.759245577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:23:32.763240 containerd[1478]: time="2026-03-07T01:23:32.759262036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:23:32.766005 containerd[1478]: time="2026-03-07T01:23:32.763852249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:23:32.779605 containerd[1478]: time="2026-03-07T01:23:32.766704030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:23:32.779605 containerd[1478]: time="2026-03-07T01:23:32.766774649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:23:32.779605 containerd[1478]: time="2026-03-07T01:23:32.766794456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:23:33.454558 containerd[1478]: time="2026-03-07T01:23:32.766974989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:23:34.246773 kubelet[2284]: E0307 01:23:34.246358 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="7s" Mar 7 01:23:34.331352 systemd[1]: Started cri-containerd-5c97ffc85414eb516fe8b032516e2c8dc47a61c5b0fedbc0eb5e0e330429ee96.scope - libcontainer container 5c97ffc85414eb516fe8b032516e2c8dc47a61c5b0fedbc0eb5e0e330429ee96. Mar 7 01:23:34.639577 systemd[1]: Started cri-containerd-03ea133b85d020df62183ad9adf5498cf31398b1ec04b4c8c065639457c9e8b7.scope - libcontainer container 03ea133b85d020df62183ad9adf5498cf31398b1ec04b4c8c065639457c9e8b7. 
Mar 7 01:23:34.731651 systemd[1]: Started cri-containerd-dc23d2bb320475b4c995b93c47461d193edf92c54f3d0d60e89704605330a1bd.scope - libcontainer container dc23d2bb320475b4c995b93c47461d193edf92c54f3d0d60e89704605330a1bd. Mar 7 01:23:34.997932 containerd[1478]: time="2026-03-07T01:23:34.997653312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c97ffc85414eb516fe8b032516e2c8dc47a61c5b0fedbc0eb5e0e330429ee96\"" Mar 7 01:23:35.006352 kubelet[2284]: E0307 01:23:35.006262 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:35.040611 containerd[1478]: time="2026-03-07T01:23:35.033214169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"03ea133b85d020df62183ad9adf5498cf31398b1ec04b4c8c065639457c9e8b7\"" Mar 7 01:23:35.040611 containerd[1478]: time="2026-03-07T01:23:35.038847453Z" level=info msg="CreateContainer within sandbox \"5c97ffc85414eb516fe8b032516e2c8dc47a61c5b0fedbc0eb5e0e330429ee96\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:23:35.040879 kubelet[2284]: E0307 01:23:35.035928 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:35.050849 containerd[1478]: time="2026-03-07T01:23:35.050702455Z" level=info msg="CreateContainer within sandbox \"03ea133b85d020df62183ad9adf5498cf31398b1ec04b4c8c065639457c9e8b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:23:35.070845 containerd[1478]: time="2026-03-07T01:23:35.070789228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7f3d44dec3cd523b4bcc0030330b52a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc23d2bb320475b4c995b93c47461d193edf92c54f3d0d60e89704605330a1bd\"" Mar 7 01:23:35.086371 kubelet[2284]: E0307 01:23:35.086160 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:35.122466 containerd[1478]: time="2026-03-07T01:23:35.116531672Z" level=info msg="CreateContainer within sandbox \"dc23d2bb320475b4c995b93c47461d193edf92c54f3d0d60e89704605330a1bd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:23:35.135034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638563241.mount: Deactivated successfully. 
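
Each static pod above moves through the same CRI sequence: `RunPodSandbox` returns a sandbox id, `CreateContainer` is issued within that sandbox, and `StartContainer` later runs the returned container id. A sketch that checks this ordering over a simplified event stream, with ids shortened from the ones in the log:

```python
events = [  # (CRI verb, sandbox id), abbreviated from the log above
    ("RunPodSandbox", "5c97ffc8"),    # kube-scheduler-localhost
    ("RunPodSandbox", "03ea133b"),    # kube-controller-manager-localhost
    ("RunPodSandbox", "dc23d2bb"),    # kube-apiserver-localhost
    ("CreateContainer", "5c97ffc8"),
    ("CreateContainer", "03ea133b"),
    ("CreateContainer", "dc23d2bb"),
]

known_sandboxes = set()
for verb, sandbox_id in events:
    if verb == "RunPodSandbox":
        known_sandboxes.add(sandbox_id)
    elif verb == "CreateContainer":
        # a CreateContainer for an unknown sandbox would mean reordering
        assert sandbox_id in known_sandboxes, f"no sandbox {sandbox_id}"
print(f"CRI ordering consistent for {len(known_sandboxes)} sandboxes")
```
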
Mar 7 01:23:35.187818 containerd[1478]: time="2026-03-07T01:23:35.186600686Z" level=info msg="CreateContainer within sandbox \"03ea133b85d020df62183ad9adf5498cf31398b1ec04b4c8c065639457c9e8b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6\"" Mar 7 01:23:35.190492 containerd[1478]: time="2026-03-07T01:23:35.189673941Z" level=info msg="StartContainer for \"71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6\"" Mar 7 01:23:35.206879 containerd[1478]: time="2026-03-07T01:23:35.206598018Z" level=info msg="CreateContainer within sandbox \"5c97ffc85414eb516fe8b032516e2c8dc47a61c5b0fedbc0eb5e0e330429ee96\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf187d81742c00c7da8c68bb7a27576a15bcb656a450fa5c498157dfe6f34597\"" Mar 7 01:23:35.212434 containerd[1478]: time="2026-03-07T01:23:35.212051078Z" level=info msg="StartContainer for \"bf187d81742c00c7da8c68bb7a27576a15bcb656a450fa5c498157dfe6f34597\"" Mar 7 01:23:35.248622 containerd[1478]: time="2026-03-07T01:23:35.247892022Z" level=info msg="CreateContainer within sandbox \"dc23d2bb320475b4c995b93c47461d193edf92c54f3d0d60e89704605330a1bd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f8940a5f31f16e1a26e7ac469b4f627f7673f2aaf3db45c7bb9bc9454f99b05\"" Mar 7 01:23:35.261627 containerd[1478]: time="2026-03-07T01:23:35.261024823Z" level=info msg="StartContainer for \"5f8940a5f31f16e1a26e7ac469b4f627f7673f2aaf3db45c7bb9bc9454f99b05\"" Mar 7 01:23:35.340141 kubelet[2284]: I0307 01:23:35.337850 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:23:35.347209 kubelet[2284]: E0307 01:23:35.344775 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Mar 7 01:23:35.436823 systemd[1]: Started cri-containerd-71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6.scope - libcontainer container 71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6. Mar 7 01:23:35.442508 systemd[1]: Started cri-containerd-bf187d81742c00c7da8c68bb7a27576a15bcb656a450fa5c498157dfe6f34597.scope - libcontainer container bf187d81742c00c7da8c68bb7a27576a15bcb656a450fa5c498157dfe6f34597. Mar 7 01:23:35.538542 systemd[1]: Started cri-containerd-5f8940a5f31f16e1a26e7ac469b4f627f7673f2aaf3db45c7bb9bc9454f99b05.scope - libcontainer container 5f8940a5f31f16e1a26e7ac469b4f627f7673f2aaf3db45c7bb9bc9454f99b05. 
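
The scopes systemd starts here are named after the container ids, `cri-containerd-<64-hex-id>.scope`, which is what ties these journal entries back to CRI records such as the `StartContainer` calls above. A small extraction sketch:

```python
import re

SCOPE_RE = re.compile(r"^cri-containerd-(?P<cid>[0-9a-f]{64})\.scope$")

def container_id(unit: str) -> str | None:
    """Pull the container id out of a cri-containerd scope unit name."""
    m = SCOPE_RE.match(unit)
    return m["cid"] if m else None

unit = ("cri-containerd-5f8940a5f31f16e1a26e7ac469b4f627f767"
        "3f2aaf3db45c7bb9bc9454f99b05.scope")
print(container_id(unit))  # the kube-apiserver container started above
```
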
Mar 7 01:23:40.193521 containerd[1478]: time="2026-03-07T01:23:40.189829460Z" level=error msg="get state for 71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6" error="context deadline exceeded: unknown" Mar 7 01:23:40.193521 containerd[1478]: time="2026-03-07T01:23:40.190502990Z" level=warning msg="unknown status" status=0 Mar 7 01:23:40.257453 containerd[1478]: time="2026-03-07T01:23:40.255273183Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Mar 7 01:23:40.265616 kubelet[2284]: E0307 01:23:40.263388 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:23:40.265616 kubelet[2284]: E0307 01:23:40.263463 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:23:40.267512 kubelet[2284]: E0307 01:23:40.264326 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6a96fa2691b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:23:19.998181809 +0000 UTC m=+1.438283161,LastTimestamp:2026-03-07 01:23:19.998181809 +0000 UTC m=+1.438283161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:23:40.267512 kubelet[2284]: E0307 01:23:40.267201 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:23:40.331735 kubelet[2284]: E0307 01:23:40.326310 2284 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:23:41.255609 kubelet[2284]: E0307 01:23:41.249683 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:23:41.255609 kubelet[2284]: E0307 01:23:41.250536 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="7s" Mar 7 01:23:41.370355 containerd[1478]: time="2026-03-07T01:23:41.369754255Z" level=info msg="StartContainer for \"71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6\" returns successfully" Mar 7 01:23:41.437296 containerd[1478]: time="2026-03-07T01:23:41.437244244Z" level=info msg="StartContainer for \"5f8940a5f31f16e1a26e7ac469b4f627f7673f2aaf3db45c7bb9bc9454f99b05\" returns successfully" Mar 7 01:23:41.456324 containerd[1478]: time="2026-03-07T01:23:41.450775789Z" level=info msg="StartContainer for \"bf187d81742c00c7da8c68bb7a27576a15bcb656a450fa5c498157dfe6f34597\" returns successfully" Mar 7 01:23:43.925553 kubelet[2284]: E0307 01:23:43.924397 2284 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:23:44.161493 kubelet[2284]: I0307 01:23:44.158664 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:23:44.829843 kubelet[2284]: E0307 01:23:44.823300 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Mar 7 01:23:45.454360 kubelet[2284]: E0307 01:23:45.448443 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:45.454360 kubelet[2284]: E0307 01:23:45.448981 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:45.458776 kubelet[2284]: E0307 01:23:45.458024 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:45.459216 kubelet[2284]: E0307 01:23:45.459033 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:45.508201 kubelet[2284]: E0307 01:23:45.507539 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:45.508201 kubelet[2284]: E0307 01:23:45.507844 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:46.695608 kubelet[2284]: E0307 01:23:46.693822 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:46.695608 kubelet[2284]: E0307 01:23:46.694312 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:46.695608 kubelet[2284]: E0307 01:23:46.694968 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:46.706772 kubelet[2284]: E0307 01:23:46.702323 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:46.722216 kubelet[2284]: E0307 01:23:46.715180 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:46.722216 kubelet[2284]: E0307 01:23:46.721896 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:47.746145 kubelet[2284]: E0307 01:23:47.743673 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:47.746145 kubelet[2284]: E0307 01:23:47.744133 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:47.753193 kubelet[2284]: E0307 01:23:47.750045 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:47.753193 kubelet[2284]: E0307 01:23:47.750409 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:49.216384 kubelet[2284]: E0307 01:23:49.214775 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:49.216384 kubelet[2284]: E0307 01:23:49.215491 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:49.466383 kubelet[2284]: E0307 01:23:49.449828 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:49.711430 kubelet[2284]: E0307 01:23:49.705762 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:49.811150 kubelet[2284]: E0307 01:23:49.811014 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:49.811854 kubelet[2284]: E0307 01:23:49.811828 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:23:52.007521 kubelet[2284]: I0307 01:23:52.003909 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:23:53.952907 kubelet[2284]: E0307 01:23:53.952507 2284 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:23:58.303142 kubelet[2284]: E0307 01:23:58.268941 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Mar 7 01:23:58.967754 kubelet[2284]: E0307 01:23:58.963819 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:23:58.967754 kubelet[2284]: E0307 01:23:58.964318 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:02.106756 kubelet[2284]: E0307 01:24:02.103873 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.189a6a96fa2691b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:23:19.998181809 +0000 UTC m=+1.438283161,LastTimestamp:2026-03-07 01:23:19.998181809 +0000 UTC m=+1.438283161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:24:02.136586 kubelet[2284]: E0307 01:24:02.105042 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 7 01:24:04.043685 kubelet[2284]: E0307 01:24:04.016773 2284 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:24:05.152443 kubelet[2284]: E0307 01:24:05.150481 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:24:06.563301 kubelet[2284]: E0307 01:24:06.562791 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:24:06.636196 kubelet[2284]: E0307 01:24:06.636004 2284 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:24:06.636196 kubelet[2284]: E0307 01:24:06.636146 2284 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:24:09.126409 kubelet[2284]: I0307 01:24:09.124453 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:24:11.552837 kubelet[2284]: I0307 01:24:11.552031 2284 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:24:11.552837 kubelet[2284]: E0307 01:24:11.552518 2284 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 01:24:12.189897 kubelet[2284]: E0307 01:24:12.188216 2284 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:12.220670 kubelet[2284]: E0307 01:24:12.220622 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:24:12.237559 kubelet[2284]: E0307 01:24:12.237522 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:12.356619 kubelet[2284]: E0307 01:24:12.356286 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:12.407308 kubelet[2284]: E0307 01:24:12.395879 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="7s" Mar 7 01:24:12.460390 kubelet[2284]: E0307 01:24:12.458948 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:12.759927 kubelet[2284]: E0307 01:24:12.745623 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:12.862526 kubelet[2284]: E0307 01:24:12.861682 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:13.018374 kubelet[2284]: E0307 01:24:13.005173 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:13.110420 kubelet[2284]: E0307 01:24:13.108958 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:13.209800 kubelet[2284]: E0307 01:24:13.209673 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:13.313272 kubelet[2284]: E0307 01:24:13.311435 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:13.530269 kubelet[2284]: E0307 01:24:13.423622 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:13.697185 kubelet[2284]: E0307 01:24:13.667392 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:13.768864 kubelet[2284]: E0307 01:24:13.768712 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:13.899320 kubelet[2284]: E0307 01:24:13.896029 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:14.007143 kubelet[2284]: E0307 01:24:13.999041 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:14.024925 kubelet[2284]: E0307 01:24:14.021392 2284 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:24:14.120419 kubelet[2284]: E0307 01:24:14.099585 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:14.202328 kubelet[2284]: E0307 01:24:14.201179 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:14.313693 
kubelet[2284]: E0307 01:24:14.302719 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:14.445141 kubelet[2284]: E0307 01:24:14.433670 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:14.534464 kubelet[2284]: E0307 01:24:14.534027 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:14.709048 kubelet[2284]: E0307 01:24:14.661422 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:14.955233 kubelet[2284]: E0307 01:24:14.953272 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:15.089553 kubelet[2284]: E0307 01:24:15.069053 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:15.249354 kubelet[2284]: E0307 01:24:15.231281 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:15.387974 kubelet[2284]: E0307 01:24:15.380724 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:15.505440 kubelet[2284]: E0307 01:24:15.505200 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:15.607176 kubelet[2284]: E0307 01:24:15.606590 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:15.708282 kubelet[2284]: E0307 01:24:15.707484 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:15.830527 kubelet[2284]: E0307 01:24:15.808544 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:15.928414 kubelet[2284]: E0307 01:24:15.916978 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:16.064778 kubelet[2284]: E0307 01:24:16.025251 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:16.135940 kubelet[2284]: E0307 01:24:16.134311 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:16.243671 kubelet[2284]: E0307 01:24:16.242553 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:16.355197 kubelet[2284]: E0307 01:24:16.353999 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:16.490389 kubelet[2284]: E0307 01:24:16.488989 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:16.590521 kubelet[2284]: E0307 01:24:16.589938 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:16.707857 kubelet[2284]: E0307 01:24:16.697551 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:16.802343 kubelet[2284]: E0307 01:24:16.801918 2284 kubelet_node_status.go:404] "Error getting the current node from lister" 
err="node \"localhost\" not found" Mar 7 01:24:16.943515 kubelet[2284]: E0307 01:24:16.942943 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:17.070687 kubelet[2284]: E0307 01:24:17.069442 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:17.208249 kubelet[2284]: E0307 01:24:17.208183 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:17.315864 kubelet[2284]: E0307 01:24:17.310384 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:17.411190 kubelet[2284]: E0307 01:24:17.410997 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:17.514420 kubelet[2284]: E0307 01:24:17.513345 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:17.616458 kubelet[2284]: E0307 01:24:17.616330 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:17.739141 kubelet[2284]: E0307 01:24:17.726165 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:17.904407 kubelet[2284]: E0307 01:24:17.894148 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:18.005566 kubelet[2284]: E0307 01:24:17.995042 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:18.100210 kubelet[2284]: E0307 01:24:18.098668 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:18.203589 kubelet[2284]: E0307 01:24:18.201596 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:18.308202 kubelet[2284]: E0307 01:24:18.307577 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:18.418317 kubelet[2284]: E0307 01:24:18.410001 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:18.516985 kubelet[2284]: E0307 01:24:18.513592 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:18.618244 kubelet[2284]: E0307 01:24:18.618139 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:18.721471 kubelet[2284]: E0307 01:24:18.720214 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:18.821277 kubelet[2284]: E0307 01:24:18.820900 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:18.941822 kubelet[2284]: E0307 01:24:18.939783 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:19.046905 kubelet[2284]: E0307 01:24:19.046385 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:19.151528 kubelet[2284]: E0307 01:24:19.151154 2284 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:19.290364 kubelet[2284]: E0307 01:24:19.270918 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:19.392999 kubelet[2284]: E0307 01:24:19.392517 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:19.548567 kubelet[2284]: E0307 01:24:19.541028 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:19.645616 kubelet[2284]: E0307 01:24:19.643697 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:19.752804 kubelet[2284]: E0307 01:24:19.749724 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:19.864940 kubelet[2284]: E0307 01:24:19.861468 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:19.988346 kubelet[2284]: E0307 01:24:19.977898 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:20.094753 kubelet[2284]: E0307 01:24:20.092840 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:20.203605 kubelet[2284]: E0307 01:24:20.194203 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:20.326567 kubelet[2284]: E0307 01:24:20.322346 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:20.424483 kubelet[2284]: E0307 01:24:20.424300 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:20.542431 kubelet[2284]: E0307 01:24:20.538327 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:20.644343 kubelet[2284]: E0307 01:24:20.643672 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:20.753435 kubelet[2284]: E0307 01:24:20.752476 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:20.906002 kubelet[2284]: E0307 01:24:20.900654 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:21.005155 kubelet[2284]: E0307 01:24:21.001934 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:21.104347 kubelet[2284]: E0307 01:24:21.104293 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:21.211354 kubelet[2284]: E0307 01:24:21.206981 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:21.308351 kubelet[2284]: E0307 01:24:21.308272 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:21.417199 kubelet[2284]: E0307 01:24:21.415623 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 
01:24:21.517541 kubelet[2284]: E0307 01:24:21.516716 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:21.619363 kubelet[2284]: E0307 01:24:21.618210 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:21.723644 kubelet[2284]: E0307 01:24:21.723584 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:21.826588 kubelet[2284]: E0307 01:24:21.826424 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:21.932958 kubelet[2284]: E0307 01:24:21.932335 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:22.039609 kubelet[2284]: E0307 01:24:22.035976 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:22.148886 kubelet[2284]: E0307 01:24:22.148806 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:22.251313 kubelet[2284]: E0307 01:24:22.251245 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:22.352971 kubelet[2284]: E0307 01:24:22.352907 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:22.517814 kubelet[2284]: E0307 01:24:22.453824 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:22.557441 kubelet[2284]: E0307 01:24:22.557397 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:22.580623 kubelet[2284]: E0307 01:24:22.571800 2284 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 01:24:22.663753 kubelet[2284]: E0307 01:24:22.663515 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:22.771915 kubelet[2284]: E0307 01:24:22.770601 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:22.879647 kubelet[2284]: E0307 01:24:22.871837 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:23.007184 kubelet[2284]: E0307 01:24:23.006850 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:23.121632 kubelet[2284]: E0307 01:24:23.119013 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:23.221407 kubelet[2284]: E0307 01:24:23.221335 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:23.324243 kubelet[2284]: E0307 01:24:23.322492 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:23.425231 kubelet[2284]: E0307 01:24:23.424627 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:23.592633 kubelet[2284]: E0307 01:24:23.586851 2284 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:23.703385 kubelet[2284]: E0307 01:24:23.702322 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:23.811241 kubelet[2284]: E0307 01:24:23.803027 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:23.916843 kubelet[2284]: E0307 01:24:23.915546 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:24.019470 kubelet[2284]: E0307 01:24:24.017569 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:24.025730 kubelet[2284]: E0307 01:24:24.025662 2284 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:24:24.120640 kubelet[2284]: E0307 01:24:24.118908 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:24.248611 kubelet[2284]: E0307 01:24:24.246014 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:24.451809 kubelet[2284]: E0307 01:24:24.439935 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:24.555892 kubelet[2284]: E0307 01:24:24.555342 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:24.671455 kubelet[2284]: E0307 01:24:24.668324 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:24.807598 kubelet[2284]: E0307 01:24:24.795456 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:25.108686 kubelet[2284]: E0307 01:24:25.067874 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:25.187382 kubelet[2284]: E0307 01:24:25.185465 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:25.292757 kubelet[2284]: E0307 01:24:25.289978 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:24:25.356523 systemd[1]: Reloading requested from client PID 2586 ('systemctl') (unit session-7.scope)... Mar 7 01:24:25.356824 systemd[1]: Reloading... 
Mar 7 01:24:25.518143 kubelet[2284]: E0307 01:24:25.505717 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
[previous message repeated roughly 30 more times between 01:24:25 and 01:24:30]
Mar 7 01:24:26.979639 zram_generator::config[2624]: No configuration found.
Mar 7 01:24:28.086148 kubelet[2284]: E0307 01:24:28.071891 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:24:28.089420 kubelet[2284]: E0307 01:24:28.089374 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:24:29.062287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:24:29.103150 kubelet[2284]: E0307 01:24:29.102662 2284 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:24:29.103150 kubelet[2284]: E0307 01:24:29.103007 2284 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:24:30.301277 systemd[1]: Reloading finished in 4882 ms.
Mar 7 01:24:30.339453 kubelet[2284]: E0307 01:24:30.338214 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
[previous message repeated 5 more times through 01:24:31]
Mar 7 01:24:31.250002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:24:31.348474 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:24:31.348942 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:24:31.349022 systemd[1]: kubelet.service: Consumed 17.831s CPU time, 129.4M memory peak, 0B memory swap peak.
Mar 7 01:24:31.425966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:24:34.011283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:24:34.056002 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:24:35.041977 kubelet[2672]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:24:35.041977 kubelet[2672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:24:35.041977 kubelet[2672]: I0307 01:24:35.040609 2672 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:24:35.639259 kubelet[2672]: I0307 01:24:35.638861 2672 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 7 01:24:35.639259 kubelet[2672]: I0307 01:24:35.638948 2672 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:24:35.639259 kubelet[2672]: I0307 01:24:35.639291 2672 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:24:35.639259 kubelet[2672]: I0307 01:24:35.639315 2672 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:24:35.651168 kubelet[2672]: I0307 01:24:35.640052 2672 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:24:35.651168 kubelet[2672]: I0307 01:24:35.650378 2672 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:24:35.659950 kubelet[2672]: I0307 01:24:35.656990 2672 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:24:35.700548 kubelet[2672]: E0307 01:24:35.698581 2672 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:24:35.700548 kubelet[2672]: I0307 01:24:35.698767 2672 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:24:36.000314 kubelet[2672]: I0307 01:24:35.995964 2672 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 7 01:24:36.000314 kubelet[2672]: I0307 01:24:35.999711 2672 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:24:36.006008 kubelet[2672]: I0307 01:24:35.999792 2672 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:24:36.006008 kubelet[2672]: I0307 01:24:36.002191 2672 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:24:36.006008 kubelet[2672]: I0307 01:24:36.002212 2672 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:24:36.006008 kubelet[2672]: I0307 01:24:36.002304 2672 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:24:36.009254 kubelet[2672]: I0307 01:24:36.008198 2672 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:24:36.012734 kubelet[2672]: I0307 01:24:36.010871 2672 kubelet.go:475] "Attempting to sync node with API 
server" Mar 7 01:24:36.012734 kubelet[2672]: I0307 01:24:36.011235 2672 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:24:36.014205 kubelet[2672]: I0307 01:24:36.014112 2672 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:24:36.014205 kubelet[2672]: I0307 01:24:36.014208 2672 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:24:36.093372 kubelet[2672]: I0307 01:24:36.088853 2672 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:24:36.097112 kubelet[2672]: I0307 01:24:36.095873 2672 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:24:36.097112 kubelet[2672]: I0307 01:24:36.095926 2672 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:24:36.257164 kubelet[2672]: I0307 01:24:36.254942 2672 server.go:1262] "Started kubelet" Mar 7 01:24:36.298345 kubelet[2672]: I0307 01:24:36.295515 2672 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:24:36.305242 kubelet[2672]: I0307 01:24:36.305155 2672 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:24:36.305242 kubelet[2672]: I0307 01:24:36.310763 2672 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:24:36.305242 kubelet[2672]: I0307 01:24:36.311312 2672 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:24:36.432291 kubelet[2672]: I0307 01:24:36.420548 2672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:24:36.432291 kubelet[2672]: I0307 01:24:36.420918 2672 server.go:310] "Adding debug handlers to kubelet server" Mar 7 01:24:36.432291 kubelet[2672]: I0307 01:24:36.424385 2672 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:24:36.452973 kubelet[2672]: I0307 01:24:36.451862 2672 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 01:24:36.452973 kubelet[2672]: I0307 01:24:36.451995 2672 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:24:36.452973 kubelet[2672]: I0307 01:24:36.452345 2672 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:24:36.462627 kubelet[2672]: I0307 01:24:36.462581 2672 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:24:36.463503 kubelet[2672]: I0307 01:24:36.463464 2672 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:24:36.543228 kubelet[2672]: I0307 01:24:36.541992 2672 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:24:36.772769 kubelet[2672]: I0307 01:24:36.772671 2672 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:24:36.806838 kubelet[2672]: I0307 01:24:36.797962 2672 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:24:36.806838 kubelet[2672]: I0307 01:24:36.798007 2672 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 01:24:36.806838 kubelet[2672]: I0307 01:24:36.798048 2672 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 01:24:36.806838 kubelet[2672]: E0307 01:24:36.798182 2672 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:24:36.898937 kubelet[2672]: E0307 01:24:36.898359 2672 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:24:37.025758 kubelet[2672]: I0307 01:24:37.025161 2672 apiserver.go:52] "Watching apiserver" Mar 7 01:24:37.091576 kubelet[2672]: I0307 01:24:37.090514 2672 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:24:37.091576 kubelet[2672]: I0307 01:24:37.090547 2672 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:24:37.091576 kubelet[2672]: I0307 01:24:37.090587 2672 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:24:37.091576 kubelet[2672]: I0307 01:24:37.091051 2672 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:24:37.091576 kubelet[2672]: I0307 01:24:37.091140 2672 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:24:37.091576 kubelet[2672]: I0307 01:24:37.091178 2672 policy_none.go:49] "None policy: Start" Mar 7 01:24:37.091576 kubelet[2672]: I0307 01:24:37.091325 2672 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:24:37.091576 kubelet[2672]: I0307 01:24:37.091355 2672 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:24:37.098643 kubelet[2672]: I0307 01:24:37.095976 2672 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 01:24:37.098643 kubelet[2672]: I0307 01:24:37.096030 2672 policy_none.go:47] "Start" Mar 7 01:24:37.098643 kubelet[2672]: E0307 01:24:37.098596 2672 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:24:37.150799 kubelet[2672]: E0307 01:24:37.149583 2672 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:24:37.150799 kubelet[2672]: I0307 01:24:37.149879 2672 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:24:37.150799 kubelet[2672]: I0307 01:24:37.149895 2672 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:24:37.150799 kubelet[2672]: I0307 01:24:37.150680 2672 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:24:37.163163 kubelet[2672]: I0307 01:24:37.162996 2672 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:24:37.179797 kubelet[2672]: E0307 01:24:37.166721 2672 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:24:37.190356 containerd[1478]: time="2026-03-07T01:24:37.185494360Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 7 01:24:37.191399 kubelet[2672]: I0307 01:24:37.191347 2672 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:24:38.043174 kubelet[2672]: I0307 01:24:38.023876 2672 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:24:38.073683 kubelet[2672]: I0307 01:24:38.062176 2672 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:24:38.075867 kubelet[2672]: I0307 01:24:38.075256 2672 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:24:38.105417 kubelet[2672]: I0307 01:24:38.105367 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f3d44dec3cd523b4bcc0030330b52a9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f3d44dec3cd523b4bcc0030330b52a9\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:24:38.106387 kubelet[2672]: I0307 01:24:38.106179 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f3d44dec3cd523b4bcc0030330b52a9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f3d44dec3cd523b4bcc0030330b52a9\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:24:38.106387 kubelet[2672]: I0307 01:24:38.106295 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f3d44dec3cd523b4bcc0030330b52a9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7f3d44dec3cd523b4bcc0030330b52a9\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:24:38.140047 kubelet[2672]: I0307 01:24:38.138901 2672 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:24:38.335504 kubelet[2672]: I0307 01:24:38.241873 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:24:38.335504 kubelet[2672]: I0307 01:24:38.242051 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:24:38.335504 kubelet[2672]: I0307 01:24:38.242171 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:24:38.335504 kubelet[2672]: I0307 01:24:38.242252 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf8c0a6f-a6da-4a5e-b45a-b9d777714486-kube-proxy\") pod \"kube-proxy-gp4cf\" (UID: \"bf8c0a6f-a6da-4a5e-b45a-b9d777714486\") " 
pod="kube-system/kube-proxy-gp4cf" Mar 7 01:24:38.335504 kubelet[2672]: I0307 01:24:38.242291 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf8c0a6f-a6da-4a5e-b45a-b9d777714486-lib-modules\") pod \"kube-proxy-gp4cf\" (UID: \"bf8c0a6f-a6da-4a5e-b45a-b9d777714486\") " pod="kube-system/kube-proxy-gp4cf" Mar 7 01:24:38.367811 kubelet[2672]: I0307 01:24:38.242411 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktmr5\" (UniqueName: \"kubernetes.io/projected/bf8c0a6f-a6da-4a5e-b45a-b9d777714486-kube-api-access-ktmr5\") pod \"kube-proxy-gp4cf\" (UID: \"bf8c0a6f-a6da-4a5e-b45a-b9d777714486\") " pod="kube-system/kube-proxy-gp4cf" Mar 7 01:24:38.367811 kubelet[2672]: I0307 01:24:38.242641 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:24:38.367811 kubelet[2672]: I0307 01:24:38.242672 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:24:38.367811 kubelet[2672]: I0307 01:24:38.242698 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:24:38.367811 kubelet[2672]: I0307 01:24:38.242721 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf8c0a6f-a6da-4a5e-b45a-b9d777714486-xtables-lock\") pod \"kube-proxy-gp4cf\" (UID: \"bf8c0a6f-a6da-4a5e-b45a-b9d777714486\") " pod="kube-system/kube-proxy-gp4cf" Mar 7 01:24:38.387171 systemd[1]: Created slice kubepods-besteffort-podbf8c0a6f_a6da_4a5e_b45a_b9d777714486.slice - libcontainer container kubepods-besteffort-podbf8c0a6f_a6da_4a5e_b45a_b9d777714486.slice. 
Mar 7 01:24:38.433709 kubelet[2672]: I0307 01:24:38.427054 2672 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:24:38.459809 kubelet[2672]: E0307 01:24:38.455454 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[previous dns.go:154 message repeated 3 more times through 01:24:38.811754]
Mar 7 01:24:38.864983 kubelet[2672]: I0307 01:24:38.862975 2672 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 7 01:24:38.864983 kubelet[2672]: I0307 01:24:38.863166 2672 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 7 01:24:38.904778 containerd[1478]: time="2026-03-07T01:24:38.904586734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gp4cf,Uid:bf8c0a6f-a6da-4a5e-b45a-b9d777714486,Namespace:kube-system,Attempt:0,}"
Mar 7 01:24:39.000512 kubelet[2672]: I0307 01:24:38.999868 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.999812837 podStartE2EDuration="999.812837ms" podCreationTimestamp="2026-03-07 01:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:24:38.999264183 +0000 UTC m=+4.697963164" watchObservedRunningTime="2026-03-07 01:24:38.999812837 +0000 UTC m=+4.698511788"
Mar 7 01:24:39.341706 kubelet[2672]: E0307 01:24:39.340372 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[previous message repeated twice more through 01:24:39.356944]
Mar 7 01:24:39.689982 kubelet[2672]: I0307 01:24:39.670753 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.670536663 podStartE2EDuration="1.670536663s" podCreationTimestamp="2026-03-07 01:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:24:39.433775611 +0000 UTC m=+5.132474592" watchObservedRunningTime="2026-03-07 01:24:39.670536663 +0000 UTC m=+5.369235635"
Mar 7 01:24:39.706937 kubelet[2672]: I0307 01:24:39.671049 2672 pod_startup_latency_tracker.go:104] "Observed pod startup
duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.67103384 podStartE2EDuration="1.67103384s" podCreationTimestamp="2026-03-07 01:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:24:39.613933892 +0000 UTC m=+5.312632853" watchObservedRunningTime="2026-03-07 01:24:39.67103384 +0000 UTC m=+5.369732802" Mar 7 01:24:40.284658 containerd[1478]: time="2026-03-07T01:24:40.271177592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:24:40.284658 containerd[1478]: time="2026-03-07T01:24:40.271379213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:24:40.284658 containerd[1478]: time="2026-03-07T01:24:40.271401574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:24:40.284658 containerd[1478]: time="2026-03-07T01:24:40.271607752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:24:40.507842 kubelet[2672]: E0307 01:24:40.507194 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:40.527254 kubelet[2672]: E0307 01:24:40.513397 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:40.702726 systemd[1]: Started cri-containerd-3b6dc228c9498aa1f78f6fe2cdda1aa52ffd83aefebdd6eb455d132d59549807.scope - libcontainer container 3b6dc228c9498aa1f78f6fe2cdda1aa52ffd83aefebdd6eb455d132d59549807. Mar 7 01:24:40.762641 systemd[1]: run-containerd-runc-k8s.io-3b6dc228c9498aa1f78f6fe2cdda1aa52ffd83aefebdd6eb455d132d59549807-runc.qDOZpg.mount: Deactivated successfully. Mar 7 01:24:41.383848 containerd[1478]: time="2026-03-07T01:24:41.382405158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gp4cf,Uid:bf8c0a6f-a6da-4a5e-b45a-b9d777714486,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b6dc228c9498aa1f78f6fe2cdda1aa52ffd83aefebdd6eb455d132d59549807\"" Mar 7 01:24:42.136470 kubelet[2672]: E0307 01:24:42.136174 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:42.165293 kubelet[2672]: E0307 01:24:42.165231 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:42.167883 containerd[1478]: time="2026-03-07T01:24:42.167781309Z" level=info msg="CreateContainer within sandbox \"3b6dc228c9498aa1f78f6fe2cdda1aa52ffd83aefebdd6eb455d132d59549807\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:24:42.420791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771657924.mount: Deactivated successfully. 
Mar 7 01:24:42.512680 containerd[1478]: time="2026-03-07T01:24:42.512587303Z" level=info msg="CreateContainer within sandbox \"3b6dc228c9498aa1f78f6fe2cdda1aa52ffd83aefebdd6eb455d132d59549807\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"941209797b1bd7970a0ea3fbc780df135df749ccfbf3c76bb34e4d1e1adcc091\"" Mar 7 01:24:42.523055 containerd[1478]: time="2026-03-07T01:24:42.518232958Z" level=info msg="StartContainer for \"941209797b1bd7970a0ea3fbc780df135df749ccfbf3c76bb34e4d1e1adcc091\"" Mar 7 01:24:43.237930 systemd[1]: Started cri-containerd-941209797b1bd7970a0ea3fbc780df135df749ccfbf3c76bb34e4d1e1adcc091.scope - libcontainer container 941209797b1bd7970a0ea3fbc780df135df749ccfbf3c76bb34e4d1e1adcc091. Mar 7 01:24:43.655224 kubelet[2672]: E0307 01:24:43.654862 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:44.482459 containerd[1478]: time="2026-03-07T01:24:44.480848406Z" level=info msg="StartContainer for \"941209797b1bd7970a0ea3fbc780df135df749ccfbf3c76bb34e4d1e1adcc091\" returns successfully" Mar 7 01:24:44.727108 kubelet[2672]: E0307 01:24:44.725353 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:44.911158 kubelet[2672]: I0307 01:24:44.906937 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gp4cf" podStartSLOduration=8.906907839 podStartE2EDuration="8.906907839s" podCreationTimestamp="2026-03-07 01:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:24:44.888473179 +0000 UTC m=+10.587172170" watchObservedRunningTime="2026-03-07 01:24:44.906907839 +0000 UTC m=+10.605606800" Mar 7 01:24:45.744882 kubelet[2672]: E0307 01:24:45.740746 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:47.143115 kubelet[2672]: I0307 01:24:47.074288 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/480be3e3-431d-412e-9c22-61f059b2bfdb-cni-plugin\") pod \"kube-flannel-ds-2hnnm\" (UID: \"480be3e3-431d-412e-9c22-61f059b2bfdb\") " pod="kube-flannel/kube-flannel-ds-2hnnm" Mar 7 01:24:47.143115 kubelet[2672]: I0307 01:24:47.074349 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/480be3e3-431d-412e-9c22-61f059b2bfdb-xtables-lock\") pod \"kube-flannel-ds-2hnnm\" (UID: \"480be3e3-431d-412e-9c22-61f059b2bfdb\") " pod="kube-flannel/kube-flannel-ds-2hnnm" Mar 7 01:24:47.143115 kubelet[2672]: I0307 01:24:47.074378 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgnh6\" (UniqueName: \"kubernetes.io/projected/480be3e3-431d-412e-9c22-61f059b2bfdb-kube-api-access-mgnh6\") pod \"kube-flannel-ds-2hnnm\" (UID: \"480be3e3-431d-412e-9c22-61f059b2bfdb\") " pod="kube-flannel/kube-flannel-ds-2hnnm" Mar 7 01:24:47.143115 kubelet[2672]: I0307 01:24:47.074407 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" 
(UniqueName: \"kubernetes.io/host-path/480be3e3-431d-412e-9c22-61f059b2bfdb-cni\") pod \"kube-flannel-ds-2hnnm\" (UID: \"480be3e3-431d-412e-9c22-61f059b2bfdb\") " pod="kube-flannel/kube-flannel-ds-2hnnm" Mar 7 01:24:47.143115 kubelet[2672]: I0307 01:24:47.074430 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/480be3e3-431d-412e-9c22-61f059b2bfdb-flannel-cfg\") pod \"kube-flannel-ds-2hnnm\" (UID: \"480be3e3-431d-412e-9c22-61f059b2bfdb\") " pod="kube-flannel/kube-flannel-ds-2hnnm" Mar 7 01:24:47.217493 kubelet[2672]: I0307 01:24:47.074455 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/480be3e3-431d-412e-9c22-61f059b2bfdb-run\") pod \"kube-flannel-ds-2hnnm\" (UID: \"480be3e3-431d-412e-9c22-61f059b2bfdb\") " pod="kube-flannel/kube-flannel-ds-2hnnm" Mar 7 01:24:47.324927 systemd[1]: Created slice kubepods-burstable-pod480be3e3_431d_412e_9c22_61f059b2bfdb.slice - libcontainer container kubepods-burstable-pod480be3e3_431d_412e_9c22_61f059b2bfdb.slice. Mar 7 01:24:48.256983 kubelet[2672]: E0307 01:24:48.256855 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:48.294903 containerd[1478]: time="2026-03-07T01:24:48.290257485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2hnnm,Uid:480be3e3-431d-412e-9c22-61f059b2bfdb,Namespace:kube-flannel,Attempt:0,}" Mar 7 01:24:48.407781 sudo[1619]: pam_unix(sudo:session): session closed for user root Mar 7 01:24:48.489912 kubelet[2672]: E0307 01:24:48.489865 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:48.492793 sshd[1614]: pam_unix(sshd:session): session closed for user core Mar 7 01:24:48.548421 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:55630.service: Deactivated successfully. Mar 7 01:24:48.594826 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:24:48.598675 systemd[1]: session-7.scope: Consumed 27.637s CPU time, 162.7M memory peak, 0B memory swap peak. Mar 7 01:24:48.617219 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:24:48.626145 systemd-logind[1464]: Removed session 7. Mar 7 01:24:48.651041 kubelet[2672]: E0307 01:24:48.651000 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:48.725934 containerd[1478]: time="2026-03-07T01:24:48.725669092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:24:48.725934 containerd[1478]: time="2026-03-07T01:24:48.725845213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:24:48.725934 containerd[1478]: time="2026-03-07T01:24:48.725860141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:24:48.726722 containerd[1478]: time="2026-03-07T01:24:48.725995646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:24:48.843421 systemd[1]: Started cri-containerd-c45b69572b13c4a36cbfcc91c2c2103424bcac928ed638a5a8f939ff994e3391.scope - libcontainer container c45b69572b13c4a36cbfcc91c2c2103424bcac928ed638a5a8f939ff994e3391. Mar 7 01:24:49.195929 kubelet[2672]: E0307 01:24:49.191326 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:49.195929 kubelet[2672]: E0307 01:24:49.193776 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:49.403780 containerd[1478]: time="2026-03-07T01:24:49.394230315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2hnnm,Uid:480be3e3-431d-412e-9c22-61f059b2bfdb,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c45b69572b13c4a36cbfcc91c2c2103424bcac928ed638a5a8f939ff994e3391\"" Mar 7 01:24:49.418848 kubelet[2672]: E0307 01:24:49.414484 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:24:49.420045 containerd[1478]: time="2026-03-07T01:24:49.418776704Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Mar 7 01:24:57.257245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181763429.mount: Deactivated successfully. Mar 7 01:24:59.327541 containerd[1478]: time="2026-03-07T01:24:59.323515242Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:24:59.363589 containerd[1478]: time="2026-03-07T01:24:59.340382895Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Mar 7 01:24:59.434544 containerd[1478]: time="2026-03-07T01:24:59.420133186Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:24:59.434544 containerd[1478]: time="2026-03-07T01:24:59.432627652Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:24:59.469237 containerd[1478]: time="2026-03-07T01:24:59.469137800Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 10.049827023s" Mar 7 01:24:59.483267 containerd[1478]: time="2026-03-07T01:24:59.473909063Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Mar 7 01:24:59.567007 containerd[1478]: time="2026-03-07T01:24:59.564240533Z" level=info msg="CreateContainer within sandbox \"c45b69572b13c4a36cbfcc91c2c2103424bcac928ed638a5a8f939ff994e3391\" for container 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Mar 7 01:24:59.719035 containerd[1478]: time="2026-03-07T01:24:59.715426391Z" level=info msg="CreateContainer within sandbox \"c45b69572b13c4a36cbfcc91c2c2103424bcac928ed638a5a8f939ff994e3391\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"4b6624f598fd60f612d9f1765f61435e5bfc00ae1baeae48703b44277d12033f\"" Mar 7 01:24:59.728870 containerd[1478]: time="2026-03-07T01:24:59.725133996Z" level=info msg="StartContainer for \"4b6624f598fd60f612d9f1765f61435e5bfc00ae1baeae48703b44277d12033f\"" Mar 7 01:25:00.605540 systemd[1]: Started cri-containerd-4b6624f598fd60f612d9f1765f61435e5bfc00ae1baeae48703b44277d12033f.scope - libcontainer container 4b6624f598fd60f612d9f1765f61435e5bfc00ae1baeae48703b44277d12033f. Mar 7 01:25:01.852800 systemd[1]: cri-containerd-4b6624f598fd60f612d9f1765f61435e5bfc00ae1baeae48703b44277d12033f.scope: Deactivated successfully. Mar 7 01:25:01.854717 containerd[1478]: time="2026-03-07T01:25:01.853389178Z" level=info msg="StartContainer for \"4b6624f598fd60f612d9f1765f61435e5bfc00ae1baeae48703b44277d12033f\" returns successfully" Mar 7 01:25:02.001341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b6624f598fd60f612d9f1765f61435e5bfc00ae1baeae48703b44277d12033f-rootfs.mount: Deactivated successfully. Mar 7 01:25:02.065287 containerd[1478]: time="2026-03-07T01:25:02.065020535Z" level=info msg="shim disconnected" id=4b6624f598fd60f612d9f1765f61435e5bfc00ae1baeae48703b44277d12033f namespace=k8s.io Mar 7 01:25:02.065287 containerd[1478]: time="2026-03-07T01:25:02.065181697Z" level=warning msg="cleaning up after shim disconnected" id=4b6624f598fd60f612d9f1765f61435e5bfc00ae1baeae48703b44277d12033f namespace=k8s.io Mar 7 01:25:02.065287 containerd[1478]: time="2026-03-07T01:25:02.065206062Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:25:02.682156 kubelet[2672]: E0307 01:25:02.679797 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:02.689387 containerd[1478]: time="2026-03-07T01:25:02.687215671Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Mar 7 01:25:17.284599 containerd[1478]: time="2026-03-07T01:25:17.283258769Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:17.284599 containerd[1478]: time="2026-03-07T01:25:17.286239974Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Mar 7 01:25:17.298837 containerd[1478]: time="2026-03-07T01:25:17.291272375Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:17.298923 containerd[1478]: time="2026-03-07T01:25:17.298847889Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:17.311962 containerd[1478]: time="2026-03-07T01:25:17.307893421Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest 
\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 14.620625984s" Mar 7 01:25:17.311962 containerd[1478]: time="2026-03-07T01:25:17.307988238Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Mar 7 01:25:17.346143 containerd[1478]: time="2026-03-07T01:25:17.334730234Z" level=info msg="CreateContainer within sandbox \"c45b69572b13c4a36cbfcc91c2c2103424bcac928ed638a5a8f939ff994e3391\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 01:25:17.688592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284063489.mount: Deactivated successfully. Mar 7 01:25:17.729592 containerd[1478]: time="2026-03-07T01:25:17.729222184Z" level=info msg="CreateContainer within sandbox \"c45b69572b13c4a36cbfcc91c2c2103424bcac928ed638a5a8f939ff994e3391\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e9858d4c586f6ea53154566762c83a642be42f96c8098df3012566a90855e001\"" Mar 7 01:25:17.734876 containerd[1478]: time="2026-03-07T01:25:17.733534165Z" level=info msg="StartContainer for \"e9858d4c586f6ea53154566762c83a642be42f96c8098df3012566a90855e001\"" Mar 7 01:25:18.146989 systemd[1]: Started cri-containerd-e9858d4c586f6ea53154566762c83a642be42f96c8098df3012566a90855e001.scope - libcontainer container e9858d4c586f6ea53154566762c83a642be42f96c8098df3012566a90855e001. Mar 7 01:25:18.641173 systemd[1]: cri-containerd-e9858d4c586f6ea53154566762c83a642be42f96c8098df3012566a90855e001.scope: Deactivated successfully. Mar 7 01:25:18.654730 containerd[1478]: time="2026-03-07T01:25:18.652283932Z" level=info msg="StartContainer for \"e9858d4c586f6ea53154566762c83a642be42f96c8098df3012566a90855e001\" returns successfully" Mar 7 01:25:18.683493 kubelet[2672]: I0307 01:25:18.683456 2672 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 7 01:25:18.719812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9858d4c586f6ea53154566762c83a642be42f96c8098df3012566a90855e001-rootfs.mount: Deactivated successfully. Mar 7 01:25:18.895583 systemd[1]: Created slice kubepods-burstable-pod549a3998_ef43_4b6b_a2c2_c8d425ceb8b9.slice - libcontainer container kubepods-burstable-pod549a3998_ef43_4b6b_a2c2_c8d425ceb8b9.slice. Mar 7 01:25:18.936225 systemd[1]: Created slice kubepods-burstable-pod419451a4_5a1e_414f_b0c1_b8e614a889cb.slice - libcontainer container kubepods-burstable-pod419451a4_5a1e_414f_b0c1_b8e614a889cb.slice. 
Mar 7 01:25:19.031259 containerd[1478]: time="2026-03-07T01:25:19.030652588Z" level=info msg="shim disconnected" id=e9858d4c586f6ea53154566762c83a642be42f96c8098df3012566a90855e001 namespace=k8s.io Mar 7 01:25:19.031912 containerd[1478]: time="2026-03-07T01:25:19.030736414Z" level=warning msg="cleaning up after shim disconnected" id=e9858d4c586f6ea53154566762c83a642be42f96c8098df3012566a90855e001 namespace=k8s.io Mar 7 01:25:19.031912 containerd[1478]: time="2026-03-07T01:25:19.031621353Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:25:19.056537 kubelet[2672]: I0307 01:25:19.055821 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p6b4\" (UniqueName: \"kubernetes.io/projected/419451a4-5a1e-414f-b0c1-b8e614a889cb-kube-api-access-7p6b4\") pod \"coredns-66bc5c9577-22pr7\" (UID: \"419451a4-5a1e-414f-b0c1-b8e614a889cb\") " pod="kube-system/coredns-66bc5c9577-22pr7" Mar 7 01:25:19.056537 kubelet[2672]: I0307 01:25:19.055934 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/549a3998-ef43-4b6b-a2c2-c8d425ceb8b9-config-volume\") pod \"coredns-66bc5c9577-dlv4b\" (UID: \"549a3998-ef43-4b6b-a2c2-c8d425ceb8b9\") " pod="kube-system/coredns-66bc5c9577-dlv4b" Mar 7 01:25:19.056537 kubelet[2672]: I0307 01:25:19.056018 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/419451a4-5a1e-414f-b0c1-b8e614a889cb-config-volume\") pod \"coredns-66bc5c9577-22pr7\" (UID: \"419451a4-5a1e-414f-b0c1-b8e614a889cb\") " pod="kube-system/coredns-66bc5c9577-22pr7" Mar 7 01:25:19.056537 kubelet[2672]: I0307 01:25:19.056048 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p98rw\" (UniqueName: \"kubernetes.io/projected/549a3998-ef43-4b6b-a2c2-c8d425ceb8b9-kube-api-access-p98rw\") pod \"coredns-66bc5c9577-dlv4b\" (UID: \"549a3998-ef43-4b6b-a2c2-c8d425ceb8b9\") " pod="kube-system/coredns-66bc5c9577-dlv4b" Mar 7 01:25:19.567248 kubelet[2672]: E0307 01:25:19.561968 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:19.634321 kubelet[2672]: E0307 01:25:19.633592 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:20.020966 containerd[1478]: time="2026-03-07T01:25:20.020573252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22pr7,Uid:419451a4-5a1e-414f-b0c1-b8e614a889cb,Namespace:kube-system,Attempt:0,}" Mar 7 01:25:20.145472 kubelet[2672]: E0307 01:25:20.142603 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:20.163123 containerd[1478]: time="2026-03-07T01:25:20.158895735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dlv4b,Uid:549a3998-ef43-4b6b-a2c2-c8d425ceb8b9,Namespace:kube-system,Attempt:0,}" Mar 7 01:25:21.957482 containerd[1478]: time="2026-03-07T01:25:21.956956549Z" level=info msg="CreateContainer within sandbox \"c45b69572b13c4a36cbfcc91c2c2103424bcac928ed638a5a8f939ff994e3391\" for 
container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 7 01:25:22.712251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791445900.mount: Deactivated successfully. Mar 7 01:25:22.731604 containerd[1478]: time="2026-03-07T01:25:22.730422617Z" level=info msg="CreateContainer within sandbox \"c45b69572b13c4a36cbfcc91c2c2103424bcac928ed638a5a8f939ff994e3391\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"44793b4b828031a7ed72631cc8b522d5787fdf72b3a3564731349fbd0681a675\"" Mar 7 01:25:22.744919 containerd[1478]: time="2026-03-07T01:25:22.735574532Z" level=info msg="StartContainer for \"44793b4b828031a7ed72631cc8b522d5787fdf72b3a3564731349fbd0681a675\"" Mar 7 01:25:22.789811 containerd[1478]: time="2026-03-07T01:25:22.789707129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22pr7,Uid:419451a4-5a1e-414f-b0c1-b8e614a889cb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0eff55bb1eaec1243ee169c4ff66e0228a9d10c3ff200e00021b10a3cb4e13c4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 7 01:25:22.790808 kubelet[2672]: E0307 01:25:22.790718 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eff55bb1eaec1243ee169c4ff66e0228a9d10c3ff200e00021b10a3cb4e13c4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 7 01:25:22.791908 kubelet[2672]: E0307 01:25:22.791868 2672 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eff55bb1eaec1243ee169c4ff66e0228a9d10c3ff200e00021b10a3cb4e13c4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-22pr7" Mar 7 01:25:22.795627 kubelet[2672]: E0307 01:25:22.792039 2672 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eff55bb1eaec1243ee169c4ff66e0228a9d10c3ff200e00021b10a3cb4e13c4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-22pr7" Mar 7 01:25:22.795627 kubelet[2672]: E0307 01:25:22.792297 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-22pr7_kube-system(419451a4-5a1e-414f-b0c1-b8e614a889cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-22pr7_kube-system(419451a4-5a1e-414f-b0c1-b8e614a889cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0eff55bb1eaec1243ee169c4ff66e0228a9d10c3ff200e00021b10a3cb4e13c4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-22pr7" podUID="419451a4-5a1e-414f-b0c1-b8e614a889cb" Mar 7 01:25:22.896610 containerd[1478]: time="2026-03-07T01:25:22.893193722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dlv4b,Uid:549a3998-ef43-4b6b-a2c2-c8d425ceb8b9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"3572a1be1d934df9972860874d4f4cf2a4c284a5f4220937f5357ed138a909d2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 7 01:25:22.899006 kubelet[2672]: E0307 01:25:22.893625 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3572a1be1d934df9972860874d4f4cf2a4c284a5f4220937f5357ed138a909d2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 7 01:25:22.899006 kubelet[2672]: E0307 01:25:22.893690 2672 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3572a1be1d934df9972860874d4f4cf2a4c284a5f4220937f5357ed138a909d2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-dlv4b" Mar 7 01:25:22.899006 kubelet[2672]: E0307 01:25:22.893718 2672 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3572a1be1d934df9972860874d4f4cf2a4c284a5f4220937f5357ed138a909d2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-dlv4b" Mar 7 01:25:22.899006 kubelet[2672]: E0307 01:25:22.895685 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dlv4b_kube-system(549a3998-ef43-4b6b-a2c2-c8d425ceb8b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dlv4b_kube-system(549a3998-ef43-4b6b-a2c2-c8d425ceb8b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3572a1be1d934df9972860874d4f4cf2a4c284a5f4220937f5357ed138a909d2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-dlv4b" podUID="549a3998-ef43-4b6b-a2c2-c8d425ceb8b9" Mar 7 01:25:22.903999 systemd[1]: run-netns-cni\x2d97284847\x2dfb8a\x2d696d\x2de04d\x2d458c78b908d9.mount: Deactivated successfully. Mar 7 01:25:22.904269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3572a1be1d934df9972860874d4f4cf2a4c284a5f4220937f5357ed138a909d2-shm.mount: Deactivated successfully. Mar 7 01:25:22.904395 systemd[1]: run-netns-cni\x2d1a4d3992\x2d0f4d\x2d1b39\x2dd6c2\x2d21aaf7a59c97.mount: Deactivated successfully. Mar 7 01:25:22.904607 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0eff55bb1eaec1243ee169c4ff66e0228a9d10c3ff200e00021b10a3cb4e13c4-shm.mount: Deactivated successfully. Mar 7 01:25:22.944360 systemd[1]: Started cri-containerd-44793b4b828031a7ed72631cc8b522d5787fdf72b3a3564731349fbd0681a675.scope - libcontainer container 44793b4b828031a7ed72631cc8b522d5787fdf72b3a3564731349fbd0681a675. 
Mar 7 01:25:23.626629 containerd[1478]: time="2026-03-07T01:25:23.626436027Z" level=info msg="StartContainer for \"44793b4b828031a7ed72631cc8b522d5787fdf72b3a3564731349fbd0681a675\" returns successfully" Mar 7 01:25:24.635138 kubelet[2672]: E0307 01:25:24.634365 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:25.541132 systemd-networkd[1401]: flannel.1: Link UP Mar 7 01:25:25.541173 systemd-networkd[1401]: flannel.1: Gained carrier Mar 7 01:25:25.682930 kubelet[2672]: E0307 01:25:25.679047 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:27.248330 systemd-networkd[1401]: flannel.1: Gained IPv6LL Mar 7 01:25:34.933177 kubelet[2672]: E0307 01:25:34.927446 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:34.951575 containerd[1478]: time="2026-03-07T01:25:34.939461326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dlv4b,Uid:549a3998-ef43-4b6b-a2c2-c8d425ceb8b9,Namespace:kube-system,Attempt:0,}" Mar 7 01:25:35.296300 systemd-networkd[1401]: cni0: Link UP Mar 7 01:25:35.296310 systemd-networkd[1401]: cni0: Gained carrier Mar 7 01:25:35.333484 systemd-networkd[1401]: cni0: Lost carrier Mar 7 01:25:35.479287 systemd-networkd[1401]: veth9adb42d2: Link UP Mar 7 01:25:35.525593 kernel: cni0: port 1(veth9adb42d2) entered blocking state Mar 7 01:25:35.526012 kernel: cni0: port 1(veth9adb42d2) entered disabled state Mar 7 01:25:35.527360 kernel: veth9adb42d2: entered allmulticast mode Mar 7 01:25:35.546159 kernel: veth9adb42d2: entered promiscuous mode Mar 7 01:25:35.546282 kernel: cni0: port 1(veth9adb42d2) entered blocking state Mar 7 01:25:35.566960 kernel: cni0: port 1(veth9adb42d2) entered forwarding state Mar 7 01:25:35.608518 kernel: cni0: port 1(veth9adb42d2) entered disabled state Mar 7 01:25:35.716392 kernel: cni0: port 1(veth9adb42d2) entered blocking state Mar 7 01:25:35.716518 kernel: cni0: port 1(veth9adb42d2) entered forwarding state Mar 7 01:25:35.714580 systemd-networkd[1401]: veth9adb42d2: Gained carrier Mar 7 01:25:35.722329 systemd-networkd[1401]: cni0: Gained carrier Mar 7 01:25:35.779537 containerd[1478]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Mar 7 01:25:35.779537 containerd[1478]: delegateAdd: netconf sent to delegate plugin: Mar 7 01:25:36.110352 containerd[1478]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-07T01:25:36.109363602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:25:36.110352 containerd[1478]: time="2026-03-07T01:25:36.109546051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:25:36.110352 containerd[1478]: time="2026-03-07T01:25:36.109567701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:36.112581 containerd[1478]: time="2026-03-07T01:25:36.111296112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:36.269025 systemd[1]: Started cri-containerd-dd7030db2fba04b68323b884d93c234fb429b14ddee565eea67febbd3002c89d.scope - libcontainer container dd7030db2fba04b68323b884d93c234fb429b14ddee565eea67febbd3002c89d. Mar 7 01:25:36.396406 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:25:36.656292 containerd[1478]: time="2026-03-07T01:25:36.656045921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dlv4b,Uid:549a3998-ef43-4b6b-a2c2-c8d425ceb8b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd7030db2fba04b68323b884d93c234fb429b14ddee565eea67febbd3002c89d\"" Mar 7 01:25:36.663790 systemd-networkd[1401]: cni0: Gained IPv6LL Mar 7 01:25:36.680844 kubelet[2672]: E0307 01:25:36.679636 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:36.721004 containerd[1478]: time="2026-03-07T01:25:36.720888476Z" level=info msg="CreateContainer within sandbox \"dd7030db2fba04b68323b884d93c234fb429b14ddee565eea67febbd3002c89d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:25:36.873339 kubelet[2672]: E0307 01:25:36.847177 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:36.872198 systemd-networkd[1401]: veth9adb42d2: Gained IPv6LL Mar 7 01:25:36.876847 containerd[1478]: time="2026-03-07T01:25:36.862553242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22pr7,Uid:419451a4-5a1e-414f-b0c1-b8e614a889cb,Namespace:kube-system,Attempt:0,}" Mar 7 01:25:36.957149 containerd[1478]: time="2026-03-07T01:25:36.949902508Z" level=info msg="CreateContainer within sandbox \"dd7030db2fba04b68323b884d93c234fb429b14ddee565eea67febbd3002c89d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7fc2e5d37d787de37351b81d91f0b73026057e577a13449d851983a1763e88bb\"" Mar 7 01:25:36.957149 containerd[1478]: time="2026-03-07T01:25:36.955181947Z" level=info msg="StartContainer for \"7fc2e5d37d787de37351b81d91f0b73026057e577a13449d851983a1763e88bb\"" Mar 7 01:25:37.305446 kernel: cni0: port 2(veth657852de) entered blocking state Mar 7 01:25:37.305866 kernel: cni0: port 2(veth657852de) entered disabled state Mar 7 01:25:37.304758 systemd-networkd[1401]: veth657852de: Link UP Mar 7 01:25:37.334164 kernel: veth657852de: entered allmulticast mode Mar 7 01:25:37.359795 kernel: veth657852de: entered promiscuous mode Mar 7 01:25:37.557607 kernel: cni0: port 2(veth657852de) entered blocking state Mar 7 01:25:37.557837 kernel: cni0: port 2(veth657852de) entered forwarding state Mar 7 01:25:37.547992 
systemd-networkd[1401]: veth657852de: Gained carrier Mar 7 01:25:37.574488 systemd[1]: Started cri-containerd-7fc2e5d37d787de37351b81d91f0b73026057e577a13449d851983a1763e88bb.scope - libcontainer container 7fc2e5d37d787de37351b81d91f0b73026057e577a13449d851983a1763e88bb. Mar 7 01:25:37.606559 containerd[1478]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000012970), "name":"cbr0", "type":"bridge"} Mar 7 01:25:37.606559 containerd[1478]: delegateAdd: netconf sent to delegate plugin: Mar 7 01:25:38.165478 containerd[1478]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-07T01:25:38.157630798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:25:38.165478 containerd[1478]: time="2026-03-07T01:25:38.164908817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:25:38.165478 containerd[1478]: time="2026-03-07T01:25:38.164939513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:38.165478 containerd[1478]: time="2026-03-07T01:25:38.165213352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:38.245462 containerd[1478]: time="2026-03-07T01:25:38.243540561Z" level=info msg="StartContainer for \"7fc2e5d37d787de37351b81d91f0b73026057e577a13449d851983a1763e88bb\" returns successfully" Mar 7 01:25:38.386129 systemd[1]: run-containerd-runc-k8s.io-896cf6abc9b39f589e98ed07e88eae400bac5483f06d2ced84c48d667876e7df-runc.FxyYXV.mount: Deactivated successfully. Mar 7 01:25:38.462614 systemd[1]: Started cri-containerd-896cf6abc9b39f589e98ed07e88eae400bac5483f06d2ced84c48d667876e7df.scope - libcontainer container 896cf6abc9b39f589e98ed07e88eae400bac5483f06d2ced84c48d667876e7df. 
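[Annotation] The netconf that flannel hands to the bridge plugin is dumped twice for each sandbox, once as a Go map and once as the JSON sent to the delegate. The Go form prints the route mask as raw bytes, `{0xff, 0xff, 0x80, 0x0}`; decoding it confirms the `192.168.0.0/17` destination in the JSON form, with this node's pod range being the /24 inside it. A small check using only values copied from the log:

```python
# Decode the host-local IPAM route from the netconf dumped in the log.
import ipaddress
import json

netconf = json.loads(
    '{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,'
    '"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],'
    '"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},'
    '"isDefaultGateway":true,"isGateway":true,"mtu":1450,'
    '"name":"cbr0","type":"bridge"}'
)

# The Go dump prints the same route mask as bytes: {0xff, 0xff, 0x80, 0x0}.
mask_bits = sum(bin(b).count("1") for b in (0xFF, 0xFF, 0x80, 0x00))
assert mask_bits == 17  # matches the JSON "dst": "192.168.0.0/17"

pod_subnet = ipaddress.ip_network(netconf["ipam"]["ranges"][0][0]["subnet"])
flannel_net = ipaddress.ip_network(netconf["ipam"]["routes"][0]["dst"])
assert pod_subnet.subnet_of(flannel_net)  # this node's /24 sits in the /17
print(pod_subnet, "routed via", flannel_net, "MTU", netconf["mtu"])
```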
Mar 7 01:25:38.530993 kubelet[2672]: E0307 01:25:38.530322 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:38.649170 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:25:38.723608 kubelet[2672]: I0307 01:25:38.723155 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-2hnnm" podStartSLOduration=24.829236189 podStartE2EDuration="52.723129222s" podCreationTimestamp="2026-03-07 01:24:46 +0000 UTC" firstStartedPulling="2026-03-07 01:24:49.417861977 +0000 UTC m=+15.116560928" lastFinishedPulling="2026-03-07 01:25:17.311755011 +0000 UTC m=+43.010453961" observedRunningTime="2026-03-07 01:25:24.709529753 +0000 UTC m=+50.408228704" watchObservedRunningTime="2026-03-07 01:25:38.723129222 +0000 UTC m=+64.421828204" Mar 7 01:25:39.102852 containerd[1478]: time="2026-03-07T01:25:39.101056365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22pr7,Uid:419451a4-5a1e-414f-b0c1-b8e614a889cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"896cf6abc9b39f589e98ed07e88eae400bac5483f06d2ced84c48d667876e7df\"" Mar 7 01:25:39.137241 kubelet[2672]: E0307 01:25:39.127324 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:39.196341 containerd[1478]: time="2026-03-07T01:25:39.196229093Z" level=info msg="CreateContainer within sandbox \"896cf6abc9b39f589e98ed07e88eae400bac5483f06d2ced84c48d667876e7df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:25:39.253387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423757314.mount: Deactivated successfully. Mar 7 01:25:39.306474 containerd[1478]: time="2026-03-07T01:25:39.305004678Z" level=info msg="CreateContainer within sandbox \"896cf6abc9b39f589e98ed07e88eae400bac5483f06d2ced84c48d667876e7df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e15322020a0f8cf032362ef855091bfcf99dbe2bb9f6e61ad33826fa456d8d2\"" Mar 7 01:25:39.313775 containerd[1478]: time="2026-03-07T01:25:39.313692143Z" level=info msg="StartContainer for \"2e15322020a0f8cf032362ef855091bfcf99dbe2bb9f6e61ad33826fa456d8d2\"" Mar 7 01:25:39.499346 systemd-networkd[1401]: veth657852de: Gained IPv6LL Mar 7 01:25:39.633853 kubelet[2672]: E0307 01:25:39.632006 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:39.692514 systemd[1]: Started cri-containerd-2e15322020a0f8cf032362ef855091bfcf99dbe2bb9f6e61ad33826fa456d8d2.scope - libcontainer container 2e15322020a0f8cf032362ef855091bfcf99dbe2bb9f6e61ad33826fa456d8d2. 
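[Annotation] The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling) equals podStartSLOduration, since the SLO figure excludes time spent pulling images. Checking with the timestamps from the log (truncated to microseconds, which strptime can parse):

```python
# Verify kubelet's startup-latency arithmetic with the logged timestamps.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
first_pull = datetime.strptime("2026-03-07 01:24:49.417861", fmt)
last_pull = datetime.strptime("2026-03-07 01:25:17.311755", fmt)

pull_window = (last_pull - first_pull).total_seconds()  # ~27.894 s
e2e = 52.723129222   # podStartE2EDuration from the log
slo = 24.829236189   # podStartSLOduration from the log

# SLO duration excludes the image-pull window.
assert abs((e2e - pull_window) - slo) < 0.001
print(f"pull window {pull_window:.3f}s, e2e {e2e}s, slo {slo}s")
```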
Mar 7 01:25:39.727990 kubelet[2672]: I0307 01:25:39.722225 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dlv4b" podStartSLOduration=63.722199264 podStartE2EDuration="1m3.722199264s" podCreationTimestamp="2026-03-07 01:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:25:38.724604463 +0000 UTC m=+64.423303444" watchObservedRunningTime="2026-03-07 01:25:39.722199264 +0000 UTC m=+65.420898215" Mar 7 01:25:40.068471 containerd[1478]: time="2026-03-07T01:25:40.068330178Z" level=info msg="StartContainer for \"2e15322020a0f8cf032362ef855091bfcf99dbe2bb9f6e61ad33826fa456d8d2\" returns successfully" Mar 7 01:25:40.934280 kubelet[2672]: E0307 01:25:40.918611 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:40.962500 kubelet[2672]: E0307 01:25:40.953567 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:41.229589 kubelet[2672]: I0307 01:25:41.219879 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-22pr7" podStartSLOduration=65.219852487 podStartE2EDuration="1m5.219852487s" podCreationTimestamp="2026-03-07 01:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:25:41.21460269 +0000 UTC m=+66.913301651" watchObservedRunningTime="2026-03-07 01:25:41.219852487 +0000 UTC m=+66.918551529" Mar 7 01:25:41.955920 kubelet[2672]: E0307 01:25:41.946751 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:41.955920 kubelet[2672]: E0307 01:25:41.950827 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:42.952400 kubelet[2672]: E0307 01:25:42.950731 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:52.801863 kubelet[2672]: E0307 01:25:52.800371 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:25:57.828165 kubelet[2672]: E0307 01:25:57.826532 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:04.806550 kubelet[2672]: E0307 01:26:04.804052 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:14.320456 kubelet[2672]: E0307 01:26:14.318626 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:16.981486 kubelet[2672]: E0307 01:26:16.975001 2672 kubelet.go:2618] "Housekeeping took 
longer than expected" err="housekeeping took too long" expected="1s" actual="1.263s" Mar 7 01:26:33.365758 systemd[1]: cri-containerd-71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6.scope: Deactivated successfully. Mar 7 01:26:35.020709 systemd[1]: cri-containerd-71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6.scope: Consumed 24.552s CPU time, 20.3M memory peak, 0B memory swap peak. Mar 7 01:26:39.325526 systemd[1]: cri-containerd-bf187d81742c00c7da8c68bb7a27576a15bcb656a450fa5c498157dfe6f34597.scope: Deactivated successfully. Mar 7 01:26:39.339657 systemd[1]: cri-containerd-bf187d81742c00c7da8c68bb7a27576a15bcb656a450fa5c498157dfe6f34597.scope: Consumed 14.348s CPU time, 18.4M memory peak, 0B memory swap peak. Mar 7 01:26:39.423741 kubelet[2672]: E0307 01:26:39.423505 2672 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 7 01:26:42.945533 kubelet[2672]: E0307 01:26:42.913554 2672 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice/cri-containerd-71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6.scope\": RecentStats: unable to find data in memory cache]" Mar 7 01:26:43.012264 kubelet[2672]: E0307 01:26:43.008830 2672 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="25.836s" Mar 7 01:26:43.317891 kubelet[2672]: E0307 01:26:43.312383 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:43.323606 kubelet[2672]: E0307 01:26:43.323559 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:43.444628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf187d81742c00c7da8c68bb7a27576a15bcb656a450fa5c498157dfe6f34597-rootfs.mount: Deactivated successfully. Mar 7 01:26:43.534938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6-rootfs.mount: Deactivated successfully. 
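[Annotation] Around 01:26:33–01:26:43 two long-running control-plane containers are torn down (their scopes report 24.552 s and 14.348 s of CPU consumed), housekeeping stalls for 25.8 s, and the node-lease update times out; kubelet then removes and recreates both containers at Attempt:1 (below). When triaging a stall like this it helps to pull the per-scope resource summaries out of the journal; a throwaway parser for the exact line shape shown above, with sample strings copied from this log:

```python
# Throwaway parser for the "Consumed ... CPU time" summaries that systemd
# emits when a container scope is torn down (format as seen in this log).
import re

SCOPE_RE = re.compile(
    r"cri-containerd-(?P<cid>[0-9a-f]{12})[0-9a-f]*\.scope: "
    r"Consumed (?P<cpu>[\d.]+)s CPU time, (?P<mem>[\d.]+[KMG]) memory peak"
)

lines = [
    "systemd[1]: cri-containerd-71faa817a4eed3d9c317e21a316d4b0d33d392c"
    "45335f4d95da46f9c5c4534c6.scope: Consumed 24.552s CPU time, "
    "20.3M memory peak, 0B memory swap peak.",
    "systemd[1]: cri-containerd-bf187d81742c00c7da8c68bb7a27576a15bcb656"
    "a450fa5c498157dfe6f34597.scope: Consumed 14.348s CPU time, "
    "18.4M memory peak, 0B memory swap peak.",
]
for line in lines:
    if m := SCOPE_RE.search(line):
        print(f"{m['cid']}…  cpu={m['cpu']}s  mem_peak={m['mem']}")
```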
Mar 7 01:26:43.545167 containerd[1478]: time="2026-03-07T01:26:43.544690154Z" level=info msg="shim disconnected" id=bf187d81742c00c7da8c68bb7a27576a15bcb656a450fa5c498157dfe6f34597 namespace=k8s.io Mar 7 01:26:43.546032 containerd[1478]: time="2026-03-07T01:26:43.545940634Z" level=warning msg="cleaning up after shim disconnected" id=bf187d81742c00c7da8c68bb7a27576a15bcb656a450fa5c498157dfe6f34597 namespace=k8s.io Mar 7 01:26:43.546349 containerd[1478]: time="2026-03-07T01:26:43.546243302Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:26:43.558339 containerd[1478]: time="2026-03-07T01:26:43.558193356Z" level=info msg="shim disconnected" id=71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6 namespace=k8s.io Mar 7 01:26:43.558736 containerd[1478]: time="2026-03-07T01:26:43.558611528Z" level=warning msg="cleaning up after shim disconnected" id=71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6 namespace=k8s.io Mar 7 01:26:43.559175 containerd[1478]: time="2026-03-07T01:26:43.558943571Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:26:43.803590 kubelet[2672]: E0307 01:26:43.802322 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:43.853881 containerd[1478]: time="2026-03-07T01:26:43.853260076Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:26:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:26:43.940479 kubelet[2672]: I0307 01:26:43.940400 2672 scope.go:117] "RemoveContainer" containerID="bf187d81742c00c7da8c68bb7a27576a15bcb656a450fa5c498157dfe6f34597" Mar 7 01:26:43.940706 kubelet[2672]: E0307 01:26:43.940555 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:44.014220 containerd[1478]: time="2026-03-07T01:26:43.950481914Z" level=info msg="CreateContainer within sandbox \"5c97ffc85414eb516fe8b032516e2c8dc47a61c5b0fedbc0eb5e0e330429ee96\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 7 01:26:44.014400 kubelet[2672]: I0307 01:26:43.952608 2672 scope.go:117] "RemoveContainer" containerID="71faa817a4eed3d9c317e21a316d4b0d33d392c45335f4d95da46f9c5c4534c6" Mar 7 01:26:44.014400 kubelet[2672]: E0307 01:26:43.952708 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:44.076954 containerd[1478]: time="2026-03-07T01:26:44.027692488Z" level=info msg="CreateContainer within sandbox \"03ea133b85d020df62183ad9adf5498cf31398b1ec04b4c8c065639457c9e8b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 7 01:26:44.431002 containerd[1478]: time="2026-03-07T01:26:44.429383477Z" level=info msg="CreateContainer within sandbox \"5c97ffc85414eb516fe8b032516e2c8dc47a61c5b0fedbc0eb5e0e330429ee96\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2b801b6284fe01801e383961d13865c29959d847e14f826cebfb7753a6d8b933\"" Mar 7 01:26:44.446360 containerd[1478]: time="2026-03-07T01:26:44.440599956Z" level=info msg="StartContainer for \"2b801b6284fe01801e383961d13865c29959d847e14f826cebfb7753a6d8b933\"" Mar 7 01:26:44.481161 
containerd[1478]: time="2026-03-07T01:26:44.473687874Z" level=info msg="CreateContainer within sandbox \"03ea133b85d020df62183ad9adf5498cf31398b1ec04b4c8c065639457c9e8b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"61c85fdb06d47548661534856e6dcd97d926d51593d7fc33f4a985455687b994\"" Mar 7 01:26:44.491146 containerd[1478]: time="2026-03-07T01:26:44.483272118Z" level=info msg="StartContainer for \"61c85fdb06d47548661534856e6dcd97d926d51593d7fc33f4a985455687b994\"" Mar 7 01:26:45.179523 systemd[1]: Started cri-containerd-2b801b6284fe01801e383961d13865c29959d847e14f826cebfb7753a6d8b933.scope - libcontainer container 2b801b6284fe01801e383961d13865c29959d847e14f826cebfb7753a6d8b933. Mar 7 01:26:45.197979 systemd[1]: Started cri-containerd-61c85fdb06d47548661534856e6dcd97d926d51593d7fc33f4a985455687b994.scope - libcontainer container 61c85fdb06d47548661534856e6dcd97d926d51593d7fc33f4a985455687b994. Mar 7 01:26:45.705655 containerd[1478]: time="2026-03-07T01:26:45.701956588Z" level=info msg="StartContainer for \"2b801b6284fe01801e383961d13865c29959d847e14f826cebfb7753a6d8b933\" returns successfully" Mar 7 01:26:45.705655 containerd[1478]: time="2026-03-07T01:26:45.704294523Z" level=info msg="StartContainer for \"61c85fdb06d47548661534856e6dcd97d926d51593d7fc33f4a985455687b994\" returns successfully" Mar 7 01:26:46.057446 kubelet[2672]: E0307 01:26:46.055358 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:46.105588 kubelet[2672]: E0307 01:26:46.101666 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:47.128013 kubelet[2672]: E0307 01:26:47.118792 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:47.128013 kubelet[2672]: E0307 01:26:47.119603 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:47.925921 kubelet[2672]: E0307 01:26:47.925449 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:48.486746 kubelet[2672]: E0307 01:26:48.483984 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:48.656825 kubelet[2672]: E0307 01:26:48.652423 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:58.539343 kubelet[2672]: E0307 01:26:58.538325 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:58.692220 kubelet[2672]: E0307 01:26:58.688378 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:26:59.344269 kubelet[2672]: E0307 01:26:59.338007 2672 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:27:04.818352 kubelet[2672]: E0307 01:27:04.812602 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:27:10.819325 kubelet[2672]: E0307 01:27:10.819282 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:27:35.412293 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:58274.service - OpenSSH per-connection server daemon (10.0.0.1:58274). Mar 7 01:27:35.730214 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 58274 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:27:35.741478 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:35.846913 systemd-logind[1464]: New session 8 of user core. Mar 7 01:27:35.884209 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:27:36.964735 sshd[4107]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:36.995777 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:27:37.031384 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:58274.service: Deactivated successfully. Mar 7 01:27:37.045288 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:27:37.049234 systemd-logind[1464]: Removed session 8. Mar 7 01:27:42.063631 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:42060.service - OpenSSH per-connection server daemon (10.0.0.1:42060). Mar 7 01:27:42.273018 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 42060 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:27:42.283555 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:42.311755 systemd-logind[1464]: New session 9 of user core. Mar 7 01:27:42.330217 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:27:43.059235 sshd[4152]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:43.099638 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:42060.service: Deactivated successfully. Mar 7 01:27:43.114564 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:27:43.137701 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:27:43.141456 systemd-logind[1464]: Removed session 9. Mar 7 01:27:43.813448 kubelet[2672]: E0307 01:27:43.806797 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:27:48.162914 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:42064.service - OpenSSH per-connection server daemon (10.0.0.1:42064). Mar 7 01:27:48.435002 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 42064 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:27:48.444738 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:48.511204 systemd-logind[1464]: New session 10 of user core. Mar 7 01:27:48.572605 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 7 01:27:49.927765 sshd[4196]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:49.988543 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:42064.service: Deactivated successfully. Mar 7 01:27:50.036496 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:27:50.052554 systemd-logind[1464]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:27:50.116013 systemd-logind[1464]: Removed session 10. Mar 7 01:27:55.100617 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:50374.service - OpenSSH per-connection server daemon (10.0.0.1:50374). Mar 7 01:27:55.392427 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 50374 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:27:55.413283 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:55.517470 systemd-logind[1464]: New session 11 of user core. Mar 7 01:27:55.539635 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 01:27:56.625476 sshd[4250]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:56.717542 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:50374.service: Deactivated successfully. Mar 7 01:27:56.748965 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 01:27:56.788494 systemd-logind[1464]: Session 11 logged out. Waiting for processes to exit. Mar 7 01:27:56.817542 systemd-logind[1464]: Removed session 11. Mar 7 01:28:01.698737 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:55072.service - OpenSSH per-connection server daemon (10.0.0.1:55072). Mar 7 01:28:01.800333 kubelet[2672]: E0307 01:28:01.798966 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:28:01.954267 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 55072 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:28:01.965569 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:02.023824 systemd-logind[1464]: New session 12 of user core. Mar 7 01:28:02.070893 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 01:28:03.357794 sshd[4286]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:03.384906 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:55072.service: Deactivated successfully. Mar 7 01:28:03.406618 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:28:03.418116 systemd-logind[1464]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:28:03.449738 systemd-logind[1464]: Removed session 12. Mar 7 01:28:05.805526 kubelet[2672]: E0307 01:28:05.801161 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:28:07.823136 kubelet[2672]: E0307 01:28:07.799564 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:28:08.454260 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:55110.service - OpenSSH per-connection server daemon (10.0.0.1:55110). 
Mar 7 01:28:08.796606 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 55110 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:28:08.808640 sshd[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:08.864887 systemd-logind[1464]: New session 13 of user core. Mar 7 01:28:08.922044 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 01:28:09.872701 sshd[4327]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:09.899805 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:55110.service: Deactivated successfully. Mar 7 01:28:09.928689 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:28:09.930243 systemd-logind[1464]: Session 13 logged out. Waiting for processes to exit. Mar 7 01:28:09.939434 systemd-logind[1464]: Removed session 13. Mar 7 01:28:15.001793 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:55230.service - OpenSSH per-connection server daemon (10.0.0.1:55230). Mar 7 01:28:15.391976 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 55230 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:28:15.417836 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:15.558918 systemd-logind[1464]: New session 14 of user core. Mar 7 01:28:15.628737 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 01:28:15.843843 kubelet[2672]: E0307 01:28:15.817916 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:28:16.966482 sshd[4375]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:17.069945 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:55230.service: Deactivated successfully. Mar 7 01:28:17.124234 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 01:28:17.155222 systemd-logind[1464]: Session 14 logged out. Waiting for processes to exit. Mar 7 01:28:17.170395 systemd-logind[1464]: Removed session 14. Mar 7 01:28:22.059529 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:37144.service - OpenSSH per-connection server daemon (10.0.0.1:37144). Mar 7 01:28:22.511277 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 37144 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:28:22.523556 sshd[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:22.601203 systemd-logind[1464]: New session 15 of user core. Mar 7 01:28:22.679276 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 01:28:22.826655 kubelet[2672]: E0307 01:28:22.819015 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:28:23.585627 sshd[4418]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:23.620238 systemd-logind[1464]: Session 15 logged out. Waiting for processes to exit. Mar 7 01:28:23.628954 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:37144.service: Deactivated successfully. Mar 7 01:28:23.651942 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 01:28:23.714566 systemd-logind[1464]: Removed session 15. Mar 7 01:28:28.732295 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:37290.service - OpenSSH per-connection server daemon (10.0.0.1:37290). 
Mar 7 01:28:28.996896 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 37290 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:28:28.998603 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:29.054202 systemd-logind[1464]: New session 16 of user core. Mar 7 01:28:29.077665 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 01:28:29.863255 sshd[4457]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:29.912497 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:37290.service: Deactivated successfully. Mar 7 01:28:29.924753 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 01:28:29.970273 systemd-logind[1464]: Session 16 logged out. Waiting for processes to exit. Mar 7 01:28:30.010283 systemd-logind[1464]: Removed session 16. Mar 7 01:28:31.805161 kubelet[2672]: E0307 01:28:31.803714 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:28:34.941302 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:35874.service - OpenSSH per-connection server daemon (10.0.0.1:35874). Mar 7 01:28:35.147613 sshd[4495]: Accepted publickey for core from 10.0.0.1 port 35874 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:28:35.162386 sshd[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:35.221241 systemd-logind[1464]: New session 17 of user core. Mar 7 01:28:35.273262 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 01:28:35.859630 sshd[4495]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:35.885721 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:35874.service: Deactivated successfully. Mar 7 01:28:35.897714 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 01:28:35.901661 systemd-logind[1464]: Session 17 logged out. Waiting for processes to exit. Mar 7 01:28:35.912528 systemd-logind[1464]: Removed session 17. Mar 7 01:28:40.967993 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:52436.service - OpenSSH per-connection server daemon (10.0.0.1:52436). Mar 7 01:28:41.210134 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 52436 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:28:41.228239 sshd[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:41.292417 systemd-logind[1464]: New session 18 of user core. Mar 7 01:28:41.318714 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 01:28:42.259858 sshd[4532]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:42.338362 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:52436.service: Deactivated successfully. Mar 7 01:28:42.392017 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 01:28:42.433321 systemd-logind[1464]: Session 18 logged out. Waiting for processes to exit. Mar 7 01:28:42.447895 systemd-logind[1464]: Removed session 18. Mar 7 01:28:47.410581 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:52458.service - OpenSSH per-connection server daemon (10.0.0.1:52458). Mar 7 01:28:47.691031 sshd[4581]: Accepted publickey for core from 10.0.0.1 port 52458 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:28:47.697578 sshd[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:47.756038 systemd-logind[1464]: New session 19 of user core. 
Mar 7 01:28:47.820345 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 01:28:48.919648 sshd[4581]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:48.944156 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:52458.service: Deactivated successfully. Mar 7 01:28:48.966666 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 01:28:48.996729 systemd-logind[1464]: Session 19 logged out. Waiting for processes to exit. Mar 7 01:28:49.005959 systemd-logind[1464]: Removed session 19. Mar 7 01:28:52.805741 kubelet[2672]: E0307 01:28:52.802501 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:28:54.042847 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:48736.service - OpenSSH per-connection server daemon (10.0.0.1:48736). Mar 7 01:28:54.429445 sshd[4619]: Accepted publickey for core from 10.0.0.1 port 48736 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:28:54.443240 sshd[4619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:54.500729 systemd-logind[1464]: New session 20 of user core. Mar 7 01:28:54.574724 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 7 01:28:55.643844 sshd[4619]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:55.710375 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:48736.service: Deactivated successfully. Mar 7 01:28:55.723258 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 01:28:55.745034 systemd-logind[1464]: Session 20 logged out. Waiting for processes to exit. Mar 7 01:28:55.754884 systemd-logind[1464]: Removed session 20. Mar 7 01:29:00.737646 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:54502.service - OpenSSH per-connection server daemon (10.0.0.1:54502). Mar 7 01:29:00.968800 sshd[4662]: Accepted publickey for core from 10.0.0.1 port 54502 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:29:00.990886 sshd[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:29:01.020159 systemd-logind[1464]: New session 21 of user core. Mar 7 01:29:01.046644 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 7 01:29:01.755710 sshd[4662]: pam_unix(sshd:session): session closed for user core Mar 7 01:29:01.795907 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:54502.service: Deactivated successfully. Mar 7 01:29:01.831266 systemd[1]: session-21.scope: Deactivated successfully. Mar 7 01:29:01.850797 systemd-logind[1464]: Session 21 logged out. Waiting for processes to exit. Mar 7 01:29:01.857015 systemd-logind[1464]: Removed session 21. Mar 7 01:29:06.831475 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:54518.service - OpenSSH per-connection server daemon (10.0.0.1:54518). Mar 7 01:29:07.009194 sshd[4698]: Accepted publickey for core from 10.0.0.1 port 54518 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:29:07.012312 sshd[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:29:07.109759 systemd-logind[1464]: New session 22 of user core. Mar 7 01:29:07.150566 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 7 01:29:07.673151 sshd[4698]: pam_unix(sshd:session): session closed for user core Mar 7 01:29:07.713931 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:54518.service: Deactivated successfully. 
Mar 7 01:29:07.728173 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:29:07.742164 systemd-logind[1464]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:29:07.750903 systemd-logind[1464]: Removed session 22.
Mar 7 01:29:08.808631 kubelet[2672]: E0307 01:29:08.806403 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:12.829724 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:42566.service - OpenSSH per-connection server daemon (10.0.0.1:42566).
Mar 7 01:29:13.102340 sshd[4733]: Accepted publickey for core from 10.0.0.1 port 42566 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:13.117623 sshd[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:13.174408 systemd-logind[1464]: New session 23 of user core.
Mar 7 01:29:13.215401 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 01:29:13.966157 sshd[4733]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:14.041147 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:42566.service: Deactivated successfully.
Mar 7 01:29:14.064031 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 01:29:14.106863 systemd-logind[1464]: Session 23 logged out. Waiting for processes to exit.
Mar 7 01:29:14.153548 systemd[1]: Started sshd@23-10.0.0.48:22-10.0.0.1:42572.service - OpenSSH per-connection server daemon (10.0.0.1:42572).
Mar 7 01:29:14.174204 systemd-logind[1464]: Removed session 23.
Mar 7 01:29:14.446954 sshd[4762]: Accepted publickey for core from 10.0.0.1 port 42572 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:14.449273 sshd[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:14.501964 systemd-logind[1464]: New session 24 of user core.
Mar 7 01:29:14.525454 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 01:29:15.884981 sshd[4762]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:15.947825 systemd[1]: sshd@23-10.0.0.48:22-10.0.0.1:42572.service: Deactivated successfully.
Mar 7 01:29:15.962395 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 01:29:15.972441 systemd-logind[1464]: Session 24 logged out. Waiting for processes to exit.
Mar 7 01:29:16.040929 systemd[1]: Started sshd@24-10.0.0.48:22-10.0.0.1:42584.service - OpenSSH per-connection server daemon (10.0.0.1:42584).
Mar 7 01:29:16.043757 systemd-logind[1464]: Removed session 24.
Mar 7 01:29:16.240529 sshd[4779]: Accepted publickey for core from 10.0.0.1 port 42584 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:16.248883 sshd[4779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:16.274166 systemd-logind[1464]: New session 25 of user core.
Mar 7 01:29:16.317278 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 01:29:17.124902 sshd[4779]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:17.150694 systemd[1]: sshd@24-10.0.0.48:22-10.0.0.1:42584.service: Deactivated successfully.
Mar 7 01:29:17.164987 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 01:29:17.205997 systemd-logind[1464]: Session 25 logged out. Waiting for processes to exit.
Mar 7 01:29:17.219968 systemd-logind[1464]: Removed session 25.
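The dns.go:154 warnings repeating through this capture fire because the node's resolver configuration lists more nameservers than the resolver can use; three matches glibc's MAXNS limit, and kubelet applies the first three and drops the rest, which is consistent with the applied line "1.1.1.1 1.0.0.1 8.8.8.8" above. A minimal Python sketch of that truncation under those assumptions (the fourth nameserver below is hypothetical; the omitted entries never appear in the log):

    # Sketch of the truncation behind kubelet's "Nameserver limits exceeded"
    # warning. The limit of 3 matches glibc's MAXNS; the resolv.conf content
    # here is illustrative, with a hypothetical fourth entry that gets dropped.
    MAX_NAMESERVERS = 3

    def split_nameservers(resolv_conf: str) -> tuple[list[str], list[str]]:
        """Return (applied, omitted) nameservers from resolv.conf text."""
        servers = []
        for line in resolv_conf.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

    applied, omitted = split_nameservers(
        "nameserver 1.1.1.1\n"
        "nameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\n"
        "nameserver 192.0.2.53\n"  # hypothetical extra entry
    )
    if omitted:
        print("Nameserver limits exceeded; applied nameserver line is:",
              " ".join(applied))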
Mar 7 01:29:22.309260 systemd[1]: Started sshd@25-10.0.0.48:22-10.0.0.1:47600.service - OpenSSH per-connection server daemon (10.0.0.1:47600).
Mar 7 01:29:22.722853 sshd[4817]: Accepted publickey for core from 10.0.0.1 port 47600 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:22.717676 sshd[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:22.751810 systemd-logind[1464]: New session 26 of user core.
Mar 7 01:29:22.816035 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 01:29:23.830747 kubelet[2672]: E0307 01:29:23.819139 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:24.171547 sshd[4817]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:24.305808 systemd[1]: sshd@25-10.0.0.48:22-10.0.0.1:47600.service: Deactivated successfully.
Mar 7 01:29:24.429035 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 01:29:24.551951 systemd-logind[1464]: Session 26 logged out. Waiting for processes to exit.
Mar 7 01:29:24.651204 systemd-logind[1464]: Removed session 26.
Mar 7 01:29:27.841045 kubelet[2672]: E0307 01:29:27.838534 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:29.388971 systemd[1]: Started sshd@26-10.0.0.48:22-10.0.0.1:47726.service - OpenSSH per-connection server daemon (10.0.0.1:47726).
Mar 7 01:29:39.361260 sshd[4851]: Accepted publickey for core from 10.0.0.1 port 47726 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:39.365959 sshd[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:39.466764 systemd-logind[1464]: New session 27 of user core.
Mar 7 01:29:39.517600 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 7 01:29:39.711262 kubelet[2672]: E0307 01:29:39.698628 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:39.711262 kubelet[2672]: E0307 01:29:39.701392 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:40.974246 kubelet[2672]: E0307 01:29:40.831910 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:41.352278 update_engine[1470]: I20260307 01:29:41.319393 1470 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 7 01:29:41.352278 update_engine[1470]: I20260307 01:29:41.329494 1470 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 7 01:29:41.357931 update_engine[1470]: I20260307 01:29:41.357628 1470 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 7 01:29:41.374495 update_engine[1470]: I20260307 01:29:41.374444 1470 omaha_request_params.cc:62] Current group set to lts
Mar 7 01:29:41.401039 update_engine[1470]: I20260307 01:29:41.394474 1470 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 7 01:29:41.401039 update_engine[1470]: I20260307 01:29:41.394532 1470 update_attempter.cc:643] Scheduling an action processor start.
Mar 7 01:29:41.401039 update_engine[1470]: I20260307 01:29:41.394759 1470 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 7 01:29:41.401039 update_engine[1470]: I20260307 01:29:41.396606 1470 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 7 01:29:41.401039 update_engine[1470]: I20260307 01:29:41.398729 1470 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 7 01:29:41.401039 update_engine[1470]: I20260307 01:29:41.398761 1470 omaha_request_action.cc:272] Request:
Mar 7 01:29:41.401039 update_engine[1470]:
Mar 7 01:29:41.401039 update_engine[1470]:
Mar 7 01:29:41.401039 update_engine[1470]:
Mar 7 01:29:41.401039 update_engine[1470]:
Mar 7 01:29:41.401039 update_engine[1470]:
Mar 7 01:29:41.401039 update_engine[1470]:
Mar 7 01:29:41.401039 update_engine[1470]:
Mar 7 01:29:41.401039 update_engine[1470]:
Mar 7 01:29:41.401039 update_engine[1470]: I20260307 01:29:41.398777 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:29:41.410222 sshd[4851]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:41.425667 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 7 01:29:41.443438 update_engine[1470]: I20260307 01:29:41.440733 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:29:41.443438 update_engine[1470]: I20260307 01:29:41.441475 1470 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:29:41.462047 systemd-logind[1464]: Session 27 logged out. Waiting for processes to exit.
Mar 7 01:29:41.468511 systemd[1]: sshd@26-10.0.0.48:22-10.0.0.1:47726.service: Deactivated successfully.
Mar 7 01:29:41.476282 update_engine[1470]: E20260307 01:29:41.474211 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:29:41.476282 update_engine[1470]: I20260307 01:29:41.474457 1470 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 7 01:29:41.474642 systemd[1]: sshd@26-10.0.0.48:22-10.0.0.1:47726.service: Consumed 1.008s CPU time.
Mar 7 01:29:41.512805 systemd[1]: session-27.scope: Deactivated successfully.
Mar 7 01:29:41.530327 systemd-logind[1464]: Removed session 27.
Mar 7 01:29:46.433177 systemd[1]: Started sshd@27-10.0.0.48:22-10.0.0.1:38204.service - OpenSSH per-connection server daemon (10.0.0.1:38204).
Mar 7 01:29:46.582460 sshd[4910]: Accepted publickey for core from 10.0.0.1 port 38204 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:46.591446 sshd[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:46.623907 systemd-logind[1464]: New session 28 of user core.
Mar 7 01:29:46.639969 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 7 01:29:47.199587 sshd[4910]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:47.217937 systemd[1]: sshd@27-10.0.0.48:22-10.0.0.1:38204.service: Deactivated successfully.
Mar 7 01:29:47.252033 systemd[1]: session-28.scope: Deactivated successfully.
Mar 7 01:29:47.260263 systemd-logind[1464]: Session 28 logged out. Waiting for processes to exit.
Mar 7 01:29:47.268222 systemd-logind[1464]: Removed session 28.
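The update_engine failure that begins here is self-inflicted by configuration: the Omaha request is posted to the literal host "disabled", so DNS resolution can never succeed. On Flatcar that host value conventionally comes from setting SERVER=disabled in /etc/flatcar/update.conf; that file is an assumption here, since it is not shown in this log. A tiny Python sketch of the failing step:

    # Sketch of the failure mode: "disabled" is used as a hostname, so name
    # resolution fails by design and the fetcher never gets an HTTP response.
    # Port 443 is illustrative; the log only shows the host string.
    import socket

    host = "disabled"
    try:
        socket.getaddrinfo(host, 443)
    except socket.gaierror:
        print(f"Unable to get http response code: Could not resolve host: {host}")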
Mar 7 01:29:51.307826 update_engine[1470]: I20260307 01:29:51.305579 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:29:51.307826 update_engine[1470]: I20260307 01:29:51.307051 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:29:51.310858 update_engine[1470]: I20260307 01:29:51.310316 1470 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:29:51.331273 update_engine[1470]: E20260307 01:29:51.329984 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:29:51.331273 update_engine[1470]: I20260307 01:29:51.330667 1470 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 7 01:29:52.247792 systemd[1]: Started sshd@28-10.0.0.48:22-10.0.0.1:40540.service - OpenSSH per-connection server daemon (10.0.0.1:40540).
Mar 7 01:29:52.498153 sshd[4947]: Accepted publickey for core from 10.0.0.1 port 40540 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:52.501441 sshd[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:52.551242 systemd-logind[1464]: New session 29 of user core.
Mar 7 01:29:53.774522 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 7 01:29:54.914995 kubelet[2672]: E0307 01:29:54.914932 2672 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.127s"
Mar 7 01:29:56.001804 sshd[4947]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:56.045421 systemd[1]: sshd@28-10.0.0.48:22-10.0.0.1:40540.service: Deactivated successfully.
Mar 7 01:29:56.104667 systemd[1]: session-29.scope: Deactivated successfully.
Mar 7 01:29:56.114430 systemd-logind[1464]: Session 29 logged out. Waiting for processes to exit.
Mar 7 01:29:56.128916 systemd-logind[1464]: Removed session 29.
Mar 7 01:29:59.801941 kubelet[2672]: E0307 01:29:59.800402 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:01.059673 systemd[1]: Started sshd@29-10.0.0.48:22-10.0.0.1:46730.service - OpenSSH per-connection server daemon (10.0.0.1:46730).
Mar 7 01:30:01.307200 update_engine[1470]: I20260307 01:30:01.306188 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:30:01.307200 update_engine[1470]: I20260307 01:30:01.306787 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:30:01.307200 update_engine[1470]: I20260307 01:30:01.307140 1470 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:30:01.337880 update_engine[1470]: E20260307 01:30:01.337528 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:30:01.337880 update_engine[1470]: I20260307 01:30:01.337690 1470 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 7 01:30:01.367458 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 46730 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:01.371249 sshd[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:01.441466 systemd-logind[1464]: New session 30 of user core.
Mar 7 01:30:01.505696 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 7 01:30:02.248443 sshd[5000]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:02.287160 systemd[1]: sshd@29-10.0.0.48:22-10.0.0.1:46730.service: Deactivated successfully.
Mar 7 01:30:02.300164 systemd[1]: session-30.scope: Deactivated successfully.
Mar 7 01:30:02.349238 systemd-logind[1464]: Session 30 logged out. Waiting for processes to exit.
Mar 7 01:30:02.353576 systemd-logind[1464]: Removed session 30.
Mar 7 01:30:07.351490 systemd[1]: Started sshd@30-10.0.0.48:22-10.0.0.1:46752.service - OpenSSH per-connection server daemon (10.0.0.1:46752).
Mar 7 01:30:07.676434 sshd[5039]: Accepted publickey for core from 10.0.0.1 port 46752 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:07.733712 sshd[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:07.829901 systemd-logind[1464]: New session 31 of user core.
Mar 7 01:30:07.846535 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 7 01:30:08.538558 sshd[5039]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:08.560881 systemd-logind[1464]: Session 31 logged out. Waiting for processes to exit.
Mar 7 01:30:08.578272 systemd[1]: sshd@30-10.0.0.48:22-10.0.0.1:46752.service: Deactivated successfully.
Mar 7 01:30:08.604756 systemd[1]: session-31.scope: Deactivated successfully.
Mar 7 01:30:08.629666 systemd-logind[1464]: Removed session 31.
Mar 7 01:30:11.314699 update_engine[1470]: I20260307 01:30:11.311954 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:30:11.314699 update_engine[1470]: I20260307 01:30:11.312466 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:30:11.314699 update_engine[1470]: I20260307 01:30:11.314186 1470 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:30:11.346860 update_engine[1470]: E20260307 01:30:11.342703 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:30:11.346860 update_engine[1470]: I20260307 01:30:11.342866 1470 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 01:30:11.346860 update_engine[1470]: I20260307 01:30:11.342921 1470 omaha_request_action.cc:617] Omaha request response:
Mar 7 01:30:11.346860 update_engine[1470]: E20260307 01:30:11.343044 1470 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 7 01:30:11.346860 update_engine[1470]: I20260307 01:30:11.343169 1470 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 7 01:30:11.346860 update_engine[1470]: I20260307 01:30:11.343185 1470 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:30:11.346860 update_engine[1470]: I20260307 01:30:11.343196 1470 update_attempter.cc:306] Processing Done.
Mar 7 01:30:11.346860 update_engine[1470]: E20260307 01:30:11.343266 1470 update_attempter.cc:619] Update failed.
Mar 7 01:30:11.346860 update_engine[1470]: I20260307 01:30:11.343302 1470 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 7 01:30:11.346860 update_engine[1470]: I20260307 01:30:11.343313 1470 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 7 01:30:11.346860 update_engine[1470]: I20260307 01:30:11.343325 1470 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 7 01:30:11.346860 update_engine[1470]: I20260307 01:30:11.343602 1470 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 7 01:30:11.346860 update_engine[1470]: I20260307 01:30:11.343899 1470 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 7 01:30:11.346860 update_engine[1470]: I20260307 01:30:11.343997 1470 omaha_request_action.cc:272] Request:
Mar 7 01:30:11.346860 update_engine[1470]:
Mar 7 01:30:11.346860 update_engine[1470]:
Mar 7 01:30:11.347603 update_engine[1470]:
Mar 7 01:30:11.347603 update_engine[1470]:
Mar 7 01:30:11.347603 update_engine[1470]:
Mar 7 01:30:11.347603 update_engine[1470]:
Mar 7 01:30:11.347603 update_engine[1470]: I20260307 01:30:11.344013 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:30:11.347603 update_engine[1470]: I20260307 01:30:11.344897 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:30:11.347603 update_engine[1470]: I20260307 01:30:11.345247 1470 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:30:11.353510 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 7 01:30:11.375877 update_engine[1470]: E20260307 01:30:11.373435 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:30:11.375877 update_engine[1470]: I20260307 01:30:11.373577 1470 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 01:30:11.375877 update_engine[1470]: I20260307 01:30:11.373593 1470 omaha_request_action.cc:617] Omaha request response:
Mar 7 01:30:11.375877 update_engine[1470]: I20260307 01:30:11.373608 1470 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:30:11.375877 update_engine[1470]: I20260307 01:30:11.373620 1470 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:30:11.375877 update_engine[1470]: I20260307 01:30:11.373629 1470 update_attempter.cc:306] Processing Done.
Mar 7 01:30:11.375877 update_engine[1470]: I20260307 01:30:11.373641 1470 update_attempter.cc:310] Error event sent.
Mar 7 01:30:11.375877 update_engine[1470]: I20260307 01:30:11.373701 1470 update_check_scheduler.cc:74] Next update check in 47m8s
Mar 7 01:30:11.380570 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 7 01:30:13.654901 systemd[1]: Started sshd@31-10.0.0.48:22-10.0.0.1:34296.service - OpenSSH per-connection server daemon (10.0.0.1:34296).
Mar 7 01:30:13.957931 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 34296 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:14.003483 sshd[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:14.056049 systemd-logind[1464]: New session 32 of user core.
Mar 7 01:30:14.108327 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 7 01:30:14.820963 sshd[5075]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:14.835294 systemd[1]: sshd@31-10.0.0.48:22-10.0.0.1:34296.service: Deactivated successfully.
Mar 7 01:30:14.839044 systemd[1]: session-32.scope: Deactivated successfully.
Mar 7 01:30:14.852859 systemd-logind[1464]: Session 32 logged out. Waiting for processes to exit.
Mar 7 01:30:14.859842 systemd-logind[1464]: Removed session 32.
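Taken together, the update_engine entries from 01:29:41 to 01:30:11 trace one complete, failed check: four fetch attempts roughly ten seconds apart, the transfer error mapped to code 2000 and then to kActionCodeOmahaErrorInHTTPResponse (37), an error event posted through the same unreachable endpoint, and the next check scheduled 47m8s out. A condensed Python sketch of that control flow, with the retry count, spacing, and interval read off this log rather than from the update_engine sources:

    # Condensed sketch of the retry/error path visible above. The numbers
    # (3 retries about 10 s apart, next check in 47m8s) come from this log,
    # not from update_engine's code.
    import time

    MAX_RETRIES = 3
    RETRY_SPACING_S = 10  # observed gap between attempts in the log

    def fetch(url: str) -> bool:
        # Stands in for libcurl; always fails because the host is "disabled".
        print("Unable to get http response code: Could not resolve host: disabled")
        return False

    def omaha_check(url: str) -> None:
        for retry in range(1, MAX_RETRIES + 1):
            if fetch(url):
                return
            print(f"No HTTP response, retry {retry}")
            time.sleep(RETRY_SPACING_S)
        if not fetch(url):  # final attempt after the retries are spent
            print("Transfer resulted in an error (0), 0 bytes downloaded")
            print("Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse")
            print("Next update check in 47m8s")

    omaha_check("https://disabled/v1/update/")  # hypothetical URL shape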
Mar 7 01:30:19.915951 systemd[1]: Started sshd@32-10.0.0.48:22-10.0.0.1:34444.service - OpenSSH per-connection server daemon (10.0.0.1:34444).
Mar 7 01:30:20.259115 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 34444 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:20.267162 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:20.300259 systemd-logind[1464]: New session 33 of user core.
Mar 7 01:30:20.315990 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 7 01:30:23.752901 kubelet[2672]: E0307 01:30:23.752319 2672 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.943s"
Mar 7 01:30:25.484800 sshd[5112]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:25.531654 systemd[1]: sshd@32-10.0.0.48:22-10.0.0.1:34444.service: Deactivated successfully.
Mar 7 01:30:25.542305 systemd[1]: session-33.scope: Deactivated successfully.
Mar 7 01:30:25.542873 systemd[1]: session-33.scope: Consumed 2.125s CPU time.
Mar 7 01:30:25.551820 systemd-logind[1464]: Session 33 logged out. Waiting for processes to exit.
Mar 7 01:30:25.563347 systemd-logind[1464]: Removed session 33.
Mar 7 01:30:28.265933 kubelet[2672]: E0307 01:30:28.247273 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:36.233611 systemd[1]: Started sshd@33-10.0.0.48:22-10.0.0.1:41028.service - OpenSSH per-connection server daemon (10.0.0.1:41028).
Mar 7 01:30:37.925236 kubelet[2672]: E0307 01:30:37.911044 2672 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.702s"
Mar 7 01:30:37.939870 kubelet[2672]: E0307 01:30:37.929007 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:37.941248 kubelet[2672]: E0307 01:30:37.941215 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:38.219222 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 41028 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:38.261437 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:38.274315 systemd-logind[1464]: New session 34 of user core.
Mar 7 01:30:38.312908 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 7 01:30:40.224445 sshd[5149]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:40.250812 systemd-logind[1464]: Session 34 logged out. Waiting for processes to exit.
Mar 7 01:30:40.256975 systemd[1]: sshd@33-10.0.0.48:22-10.0.0.1:41028.service: Deactivated successfully.
Mar 7 01:30:40.341016 systemd[1]: session-34.scope: Deactivated successfully.
Mar 7 01:30:40.374796 systemd-logind[1464]: Removed session 34.
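The kubelet.go:2618 errors above (actual="2.943s", actual="8.702s") mean a pass of kubelet's housekeeping loop overran the 1s interval it expected, per the message's expected="1s" field; together with sessions consuming whole CPU-seconds, this suggests transient load on the node. The check itself is simple to sketch:

    # Sketch of the check behind "Housekeeping took longer than expected":
    # time one housekeeping pass and warn when it overruns the expected
    # interval (1s, matching the expected="1s" field in the log).
    import time

    EXPECTED_S = 1.0

    def run_housekeeping(work) -> None:
        start = time.monotonic()
        work()
        actual = time.monotonic() - start
        if actual > EXPECTED_S:
            print(f'"Housekeeping took longer than expected" '
                  f'expected="{EXPECTED_S:.0f}s" actual="{actual:.3f}s"')

    run_housekeeping(lambda: time.sleep(1.2))  # simulated slow pass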
Mar 7 01:30:43.807776 kubelet[2672]: E0307 01:30:43.807135 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:44.807281 kubelet[2672]: E0307 01:30:44.804886 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:45.369198 systemd[1]: Started sshd@34-10.0.0.48:22-10.0.0.1:46880.service - OpenSSH per-connection server daemon (10.0.0.1:46880).
Mar 7 01:30:45.624959 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 46880 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:45.649876 sshd[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:45.713987 systemd-logind[1464]: New session 35 of user core.
Mar 7 01:30:45.753868 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 7 01:30:46.538684 sshd[5203]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:46.589308 systemd[1]: sshd@34-10.0.0.48:22-10.0.0.1:46880.service: Deactivated successfully.
Mar 7 01:30:46.600289 systemd[1]: session-35.scope: Deactivated successfully.
Mar 7 01:30:46.606953 systemd-logind[1464]: Session 35 logged out. Waiting for processes to exit.
Mar 7 01:30:46.611542 systemd-logind[1464]: Removed session 35.
Mar 7 01:30:51.622802 systemd[1]: Started sshd@35-10.0.0.48:22-10.0.0.1:46166.service - OpenSSH per-connection server daemon (10.0.0.1:46166).
Mar 7 01:30:51.771751 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 46166 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:51.775682 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:51.822155 systemd-logind[1464]: New session 36 of user core.
Mar 7 01:30:51.844042 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 7 01:30:52.728413 sshd[5242]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:52.759703 systemd-logind[1464]: Session 36 logged out. Waiting for processes to exit.
Mar 7 01:30:52.761238 systemd[1]: sshd@35-10.0.0.48:22-10.0.0.1:46166.service: Deactivated successfully.
Mar 7 01:30:52.790564 systemd[1]: session-36.scope: Deactivated successfully.
Mar 7 01:30:52.803423 systemd-logind[1464]: Removed session 36.
Mar 7 01:30:57.820249 systemd[1]: Started sshd@36-10.0.0.48:22-10.0.0.1:46172.service - OpenSSH per-connection server daemon (10.0.0.1:46172).
Mar 7 01:30:58.009145 sshd[5277]: Accepted publickey for core from 10.0.0.1 port 46172 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:58.024997 sshd[5277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:58.056179 systemd-logind[1464]: New session 37 of user core.
Mar 7 01:30:58.082552 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 7 01:30:59.021730 sshd[5277]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:59.046827 systemd[1]: sshd@36-10.0.0.48:22-10.0.0.1:46172.service: Deactivated successfully.
Mar 7 01:30:59.065493 systemd[1]: session-37.scope: Deactivated successfully.
Mar 7 01:30:59.083797 systemd-logind[1464]: Session 37 logged out. Waiting for processes to exit.
Mar 7 01:30:59.102758 systemd-logind[1464]: Removed session 37.
Mar 7 01:31:03.833181 kubelet[2672]: E0307 01:31:03.829468 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:31:03.833181 kubelet[2672]: E0307 01:31:03.829600 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:31:04.061980 systemd[1]: Started sshd@37-10.0.0.48:22-10.0.0.1:35052.service - OpenSSH per-connection server daemon (10.0.0.1:35052).
Mar 7 01:31:04.204960 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 35052 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:04.216421 sshd[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:04.253807 systemd-logind[1464]: New session 38 of user core.
Mar 7 01:31:04.291380 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 7 01:31:04.952328 sshd[5333]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:05.015597 systemd[1]: sshd@37-10.0.0.48:22-10.0.0.1:35052.service: Deactivated successfully.
Mar 7 01:31:05.056692 systemd[1]: session-38.scope: Deactivated successfully.
Mar 7 01:31:05.070399 systemd-logind[1464]: Session 38 logged out. Waiting for processes to exit.
Mar 7 01:31:05.114054 systemd[1]: Started sshd@38-10.0.0.48:22-10.0.0.1:35072.service - OpenSSH per-connection server daemon (10.0.0.1:35072).
Mar 7 01:31:05.143772 systemd-logind[1464]: Removed session 38.
Mar 7 01:31:05.402606 sshd[5349]: Accepted publickey for core from 10.0.0.1 port 35072 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:05.418359 sshd[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:05.526144 systemd-logind[1464]: New session 39 of user core.
Mar 7 01:31:05.537487 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 7 01:31:06.753423 sshd[5349]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:06.831309 systemd[1]: Started sshd@39-10.0.0.48:22-10.0.0.1:35074.service - OpenSSH per-connection server daemon (10.0.0.1:35074).
Mar 7 01:31:06.835834 systemd[1]: sshd@38-10.0.0.48:22-10.0.0.1:35072.service: Deactivated successfully.
Mar 7 01:31:06.848742 systemd[1]: session-39.scope: Deactivated successfully.
Mar 7 01:31:06.858687 systemd-logind[1464]: Session 39 logged out. Waiting for processes to exit.
Mar 7 01:31:06.883951 systemd-logind[1464]: Removed session 39.
Mar 7 01:31:07.022974 sshd[5360]: Accepted publickey for core from 10.0.0.1 port 35074 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:07.041546 sshd[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:07.071254 systemd-logind[1464]: New session 40 of user core.
Mar 7 01:31:07.092933 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 7 01:31:10.302324 sshd[5360]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:10.345301 systemd[1]: sshd@39-10.0.0.48:22-10.0.0.1:35074.service: Deactivated successfully.
Mar 7 01:31:10.355907 systemd[1]: session-40.scope: Deactivated successfully.
Mar 7 01:31:10.356350 systemd[1]: session-40.scope: Consumed 1.261s CPU time.
Mar 7 01:31:10.360333 systemd-logind[1464]: Session 40 logged out. Waiting for processes to exit.
Mar 7 01:31:10.393377 systemd[1]: Started sshd@40-10.0.0.48:22-10.0.0.1:45364.service - OpenSSH per-connection server daemon (10.0.0.1:45364).
Mar 7 01:31:10.397584 systemd-logind[1464]: Removed session 40.
Mar 7 01:31:10.534781 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 45364 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:10.547585 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:10.591361 systemd-logind[1464]: New session 41 of user core.
Mar 7 01:31:10.631352 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 7 01:31:11.636978 sshd[5403]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:11.695190 systemd[1]: sshd@40-10.0.0.48:22-10.0.0.1:45364.service: Deactivated successfully.
Mar 7 01:31:11.717253 systemd[1]: session-41.scope: Deactivated successfully.
Mar 7 01:31:11.736230 systemd-logind[1464]: Session 41 logged out. Waiting for processes to exit.
Mar 7 01:31:11.763929 systemd[1]: Started sshd@41-10.0.0.48:22-10.0.0.1:45368.service - OpenSSH per-connection server daemon (10.0.0.1:45368).
Mar 7 01:31:11.805202 systemd-logind[1464]: Removed session 41.
Mar 7 01:31:11.964419 sshd[5418]: Accepted publickey for core from 10.0.0.1 port 45368 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:11.966257 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:12.016649 systemd-logind[1464]: New session 42 of user core.
Mar 7 01:31:12.040013 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 7 01:31:12.586559 sshd[5418]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:12.621518 systemd[1]: sshd@41-10.0.0.48:22-10.0.0.1:45368.service: Deactivated successfully.
Mar 7 01:31:12.629721 systemd[1]: session-42.scope: Deactivated successfully.
Mar 7 01:31:12.645717 systemd-logind[1464]: Session 42 logged out. Waiting for processes to exit.
Mar 7 01:31:12.651049 systemd-logind[1464]: Removed session 42.
Mar 7 01:31:17.639687 systemd[1]: Started sshd@42-10.0.0.48:22-10.0.0.1:45380.service - OpenSSH per-connection server daemon (10.0.0.1:45380).
Mar 7 01:31:17.728845 sshd[5453]: Accepted publickey for core from 10.0.0.1 port 45380 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:17.737993 sshd[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:17.769744 systemd-logind[1464]: New session 43 of user core.
Mar 7 01:31:17.789559 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 7 01:31:18.126868 sshd[5453]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:18.148323 systemd[1]: sshd@42-10.0.0.48:22-10.0.0.1:45380.service: Deactivated successfully.
Mar 7 01:31:18.154973 systemd[1]: session-43.scope: Deactivated successfully.
Mar 7 01:31:18.162805 systemd-logind[1464]: Session 43 logged out. Waiting for processes to exit.
Mar 7 01:31:18.170557 systemd-logind[1464]: Removed session 43.
Mar 7 01:31:23.180471 systemd[1]: Started sshd@43-10.0.0.48:22-10.0.0.1:42012.service - OpenSSH per-connection server daemon (10.0.0.1:42012).
Mar 7 01:31:23.398267 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 42012 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:23.413928 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:23.540908 systemd-logind[1464]: New session 44 of user core.
Mar 7 01:31:23.624780 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 7 01:31:24.345851 sshd[5490]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:24.390132 systemd[1]: sshd@43-10.0.0.48:22-10.0.0.1:42012.service: Deactivated successfully.
Mar 7 01:31:24.400979 systemd[1]: session-44.scope: Deactivated successfully.
Mar 7 01:31:24.410950 systemd-logind[1464]: Session 44 logged out. Waiting for processes to exit.
Mar 7 01:31:24.415980 systemd-logind[1464]: Removed session 44.
Mar 7 01:31:29.416810 systemd[1]: Started sshd@44-10.0.0.48:22-10.0.0.1:42034.service - OpenSSH per-connection server daemon (10.0.0.1:42034).
Mar 7 01:31:29.567147 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 42034 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:29.569792 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:29.591237 systemd-logind[1464]: New session 45 of user core.
Mar 7 01:31:29.600187 systemd[1]: Started session-45.scope - Session 45 of User core.
Mar 7 01:31:30.235744 sshd[5532]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:30.254211 systemd[1]: sshd@44-10.0.0.48:22-10.0.0.1:42034.service: Deactivated successfully.
Mar 7 01:31:30.261258 systemd[1]: session-45.scope: Deactivated successfully.
Mar 7 01:31:30.291898 systemd-logind[1464]: Session 45 logged out. Waiting for processes to exit.
Mar 7 01:31:30.301813 systemd-logind[1464]: Removed session 45.
Mar 7 01:31:35.342363 systemd[1]: Started sshd@45-10.0.0.48:22-10.0.0.1:56430.service - OpenSSH per-connection server daemon (10.0.0.1:56430).
Mar 7 01:31:35.537589 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 56430 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:35.531922 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:35.599253 systemd-logind[1464]: New session 46 of user core.
Mar 7 01:31:35.610376 systemd[1]: Started session-46.scope - Session 46 of User core.
Mar 7 01:31:36.342530 sshd[5570]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:36.369483 systemd[1]: sshd@45-10.0.0.48:22-10.0.0.1:56430.service: Deactivated successfully.
Mar 7 01:31:36.383256 systemd[1]: session-46.scope: Deactivated successfully.
Mar 7 01:31:36.391838 systemd-logind[1464]: Session 46 logged out. Waiting for processes to exit.
Mar 7 01:31:36.396623 systemd-logind[1464]: Removed session 46.
Mar 7 01:31:37.812861 kubelet[2672]: E0307 01:31:37.803438 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:31:41.495716 systemd[1]: Started sshd@46-10.0.0.48:22-10.0.0.1:50286.service - OpenSSH per-connection server daemon (10.0.0.1:50286).
Mar 7 01:31:41.860714 sshd[5616]: Accepted publickey for core from 10.0.0.1 port 50286 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:41.923887 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:42.026966 systemd-logind[1464]: New session 47 of user core.
Mar 7 01:31:42.081576 systemd[1]: Started session-47.scope - Session 47 of User core.
Mar 7 01:31:43.053784 sshd[5616]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:43.119358 systemd[1]: sshd@46-10.0.0.48:22-10.0.0.1:50286.service: Deactivated successfully.
Mar 7 01:31:43.154546 systemd[1]: session-47.scope: Deactivated successfully.
Mar 7 01:31:43.159293 systemd-logind[1464]: Session 47 logged out. Waiting for processes to exit.
Mar 7 01:31:43.174988 systemd-logind[1464]: Removed session 47.
Mar 7 01:31:46.806001 kubelet[2672]: E0307 01:31:46.803897 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:31:48.139742 systemd[1]: Started sshd@47-10.0.0.48:22-10.0.0.1:50294.service - OpenSSH per-connection server daemon (10.0.0.1:50294).
Mar 7 01:31:48.248544 sshd[5652]: Accepted publickey for core from 10.0.0.1 port 50294 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:48.265207 sshd[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:48.326684 systemd-logind[1464]: New session 48 of user core.
Mar 7 01:31:48.340319 systemd[1]: Started session-48.scope - Session 48 of User core.
Mar 7 01:31:48.792048 sshd[5652]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:48.825709 systemd[1]: sshd@47-10.0.0.48:22-10.0.0.1:50294.service: Deactivated successfully.
Mar 7 01:31:48.837425 systemd[1]: session-48.scope: Deactivated successfully.
Mar 7 01:31:48.843271 systemd-logind[1464]: Session 48 logged out. Waiting for processes to exit.
Mar 7 01:31:48.845593 systemd-logind[1464]: Removed session 48.
Mar 7 01:31:53.871642 systemd[1]: Started sshd@48-10.0.0.48:22-10.0.0.1:45294.service - OpenSSH per-connection server daemon (10.0.0.1:45294).
Mar 7 01:31:54.051359 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 45294 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:54.060563 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:54.120723 systemd-logind[1464]: New session 49 of user core.
Mar 7 01:31:54.183894 systemd[1]: Started session-49.scope - Session 49 of User core.
Mar 7 01:32:03.534277 kubelet[2672]: E0307 01:32:03.529907 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:32:03.836855 kubelet[2672]: E0307 01:32:03.836818 2672 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.727s"
Mar 7 01:32:03.855943 kubelet[2672]: E0307 01:32:03.855897 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:32:04.171944 sshd[5686]: pam_unix(sshd:session): session closed for user core
Mar 7 01:32:04.230715 systemd[1]: sshd@48-10.0.0.48:22-10.0.0.1:45294.service: Deactivated successfully.
Mar 7 01:32:04.270579 systemd[1]: session-49.scope: Deactivated successfully.
Mar 7 01:32:04.289746 systemd[1]: session-49.scope: Consumed 2.275s CPU time.
Mar 7 01:32:04.310694 systemd-logind[1464]: Session 49 logged out. Waiting for processes to exit.
Mar 7 01:32:04.329257 systemd-logind[1464]: Removed session 49.
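Aside from the kubelet and update_engine noise, this stretch of the journal is uniform SSH churn: service start, publickey accept, PAM session open, a logind session, then teardown. When auditing a capture like this, pairing each "New session N" with its "Removed session N" yields per-session durations; a small Python sketch, assuming exactly the logind line shapes and "Mar 7" timestamps seen here:

    # Pair "New session N" / "Removed session N" journal lines to compute
    # session durations. The regex assumes the exact logind line shapes and
    # "Mar 7 HH:MM:SS.ffffff" timestamps from this capture.
    import re
    from datetime import datetime

    LINE = re.compile(
        r"^Mar\s+7\s+(?P<ts>\d{2}:\d{2}:\d{2}\.\d+)\s+systemd-logind\[\d+\]:\s+"
        r"(?:New session (?P<new>\d+) of user|Removed session (?P<gone>\d+)\.)"
    )

    def session_durations(journal: str) -> dict[str, float]:
        opened: dict[str, datetime] = {}
        durations: dict[str, float] = {}
        for line in journal.splitlines():
            m = LINE.match(line)
            if not m:
                continue
            ts = datetime.strptime(m["ts"], "%H:%M:%S.%f")
            if m["new"]:
                opened[m["new"]] = ts
            elif m["gone"] in opened:
                durations[m["gone"]] = (ts - opened.pop(m["gone"])).total_seconds()
        return durations

    sample = (
        "Mar 7 01:31:54.120723 systemd-logind[1464]: New session 49 of user core.\n"
        "Mar 7 01:32:04.329257 systemd-logind[1464]: Removed session 49.\n"
    )
    print(session_durations(sample))  # {'49': 10.208534}

Run over the lines above, this makes the outlier obvious: most sessions live for about a second, while session 49 stays open for roughly ten, coinciding with the housekeeping overrun and the 2.275s of CPU time it consumed.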
Mar 7 01:32:05.823924 kubelet[2672]: E0307 01:32:05.823140 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:32:09.266570 systemd[1]: Started sshd@49-10.0.0.48:22-10.0.0.1:36808.service - OpenSSH per-connection server daemon (10.0.0.1:36808).
Mar 7 01:32:09.370294 sshd[5731]: Accepted publickey for core from 10.0.0.1 port 36808 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:32:09.395891 sshd[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:32:09.426315 systemd-logind[1464]: New session 50 of user core.
Mar 7 01:32:09.434614 systemd[1]: Started session-50.scope - Session 50 of User core.
Mar 7 01:32:09.814823 kubelet[2672]: E0307 01:32:09.806180 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:32:10.066847 sshd[5731]: pam_unix(sshd:session): session closed for user core
Mar 7 01:32:10.098818 systemd[1]: sshd@49-10.0.0.48:22-10.0.0.1:36808.service: Deactivated successfully.
Mar 7 01:32:10.107043 systemd[1]: session-50.scope: Deactivated successfully.
Mar 7 01:32:10.115186 systemd-logind[1464]: Session 50 logged out. Waiting for processes to exit.
Mar 7 01:32:10.151387 systemd-logind[1464]: Removed session 50.
Mar 7 01:32:15.212730 systemd[1]: Started sshd@50-10.0.0.48:22-10.0.0.1:50068.service - OpenSSH per-connection server daemon (10.0.0.1:50068).
Mar 7 01:32:15.353664 sshd[5774]: Accepted publickey for core from 10.0.0.1 port 50068 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:32:15.357319 sshd[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:32:15.396166 systemd-logind[1464]: New session 51 of user core.
Mar 7 01:32:15.410581 systemd[1]: Started session-51.scope - Session 51 of User core.
Mar 7 01:32:15.997002 sshd[5774]: pam_unix(sshd:session): session closed for user core
Mar 7 01:32:16.038034 systemd[1]: sshd@50-10.0.0.48:22-10.0.0.1:50068.service: Deactivated successfully.
Mar 7 01:32:16.052769 systemd[1]: session-51.scope: Deactivated successfully.
Mar 7 01:32:16.062600 systemd-logind[1464]: Session 51 logged out. Waiting for processes to exit.
Mar 7 01:32:16.077476 systemd-logind[1464]: Removed session 51.
Mar 7 01:32:21.123705 systemd[1]: Started sshd@51-10.0.0.48:22-10.0.0.1:43188.service - OpenSSH per-connection server daemon (10.0.0.1:43188).
Mar 7 01:32:21.301053 sshd[5810]: Accepted publickey for core from 10.0.0.1 port 43188 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:32:21.307486 sshd[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:32:21.590240 systemd-logind[1464]: New session 52 of user core.
Mar 7 01:32:21.615723 systemd[1]: Started session-52.scope - Session 52 of User core.
Mar 7 01:32:25.402640 kubelet[2672]: E0307 01:32:25.383410 2672 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.512s"
Mar 7 01:32:26.246692 sshd[5810]: pam_unix(sshd:session): session closed for user core
Mar 7 01:32:26.304370 systemd-logind[1464]: Session 52 logged out. Waiting for processes to exit.
Mar 7 01:32:26.306694 systemd[1]: sshd@51-10.0.0.48:22-10.0.0.1:43188.service: Deactivated successfully.
Mar 7 01:32:26.335637 systemd[1]: session-52.scope: Deactivated successfully.
Mar 7 01:32:26.336323 systemd[1]: session-52.scope: Consumed 1.422s CPU time.
Mar 7 01:32:26.342247 systemd-logind[1464]: Removed session 52.
Mar 7 01:32:27.846225 kubelet[2672]: E0307 01:32:27.838600 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:32:31.333473 systemd[1]: Started sshd@52-10.0.0.48:22-10.0.0.1:36480.service - OpenSSH per-connection server daemon (10.0.0.1:36480).
Mar 7 01:32:31.605376 sshd[5851]: Accepted publickey for core from 10.0.0.1 port 36480 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:32:31.622643 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:32:31.650975 systemd-logind[1464]: New session 53 of user core.
Mar 7 01:32:31.663477 systemd[1]: Started session-53.scope - Session 53 of User core.
Mar 7 01:32:32.345275 sshd[5851]: pam_unix(sshd:session): session closed for user core
Mar 7 01:32:32.369408 systemd[1]: sshd@52-10.0.0.48:22-10.0.0.1:36480.service: Deactivated successfully.
Mar 7 01:32:32.386630 systemd[1]: session-53.scope: Deactivated successfully.
Mar 7 01:32:32.425153 systemd-logind[1464]: Session 53 logged out. Waiting for processes to exit.
Mar 7 01:32:32.433525 systemd-logind[1464]: Removed session 53.
Mar 7 01:32:38.753193 systemd[1]: Started sshd@53-10.0.0.48:22-10.0.0.1:36488.service - OpenSSH per-connection server daemon (10.0.0.1:36488).
Mar 7 01:32:39.802853 kubelet[2672]: E0307 01:32:39.753522 2672 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.441s"
Mar 7 01:32:41.367370 sshd[5884]: Accepted publickey for core from 10.0.0.1 port 36488 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:32:41.530518 sshd[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:32:41.866757 systemd-logind[1464]: New session 54 of user core.
Mar 7 01:32:42.073693 systemd[1]: Started session-54.scope - Session 54 of User core.
Mar 7 01:32:42.560740 kubelet[2672]: E0307 01:32:42.542665 2672 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.203s"
Mar 7 01:32:43.066774 sshd[5884]: pam_unix(sshd:session): session closed for user core
Mar 7 01:32:43.138379 systemd[1]: sshd@53-10.0.0.48:22-10.0.0.1:36488.service: Deactivated successfully.
Mar 7 01:32:43.157045 systemd[1]: session-54.scope: Deactivated successfully.
Mar 7 01:32:43.170283 systemd-logind[1464]: Session 54 logged out. Waiting for processes to exit.
Mar 7 01:32:43.205671 systemd-logind[1464]: Removed session 54.
Mar 7 01:32:48.133530 systemd[1]: Started sshd@54-10.0.0.48:22-10.0.0.1:39746.service - OpenSSH per-connection server daemon (10.0.0.1:39746).
Mar 7 01:32:48.304167 sshd[5939]: Accepted publickey for core from 10.0.0.1 port 39746 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:32:48.311854 sshd[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:32:48.379814 systemd-logind[1464]: New session 55 of user core.
Mar 7 01:32:48.414971 systemd[1]: Started session-55.scope - Session 55 of User core.
Mar 7 01:32:49.136910 sshd[5939]: pam_unix(sshd:session): session closed for user core
Mar 7 01:32:49.152769 systemd[1]: sshd@54-10.0.0.48:22-10.0.0.1:39746.service: Deactivated successfully.
Mar 7 01:32:49.172743 systemd[1]: session-55.scope: Deactivated successfully.
Mar 7 01:32:49.195013 systemd-logind[1464]: Session 55 logged out. Waiting for processes to exit.
Mar 7 01:32:49.203007 systemd-logind[1464]: Removed session 55.
Mar 7 01:32:55.998623 systemd[1]: Started sshd@55-10.0.0.48:22-10.0.0.1:49458.service - OpenSSH per-connection server daemon (10.0.0.1:49458).
Mar 7 01:32:56.128171 kubelet[2672]: E0307 01:32:56.128045 2672 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.06s"
Mar 7 01:32:56.813622 sshd[5957]: Accepted publickey for core from 10.0.0.1 port 49458 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:32:56.827560 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:32:56.856192 systemd-logind[1464]: New session 56 of user core.
Mar 7 01:32:56.890348 systemd[1]: Started session-56.scope - Session 56 of User core.
Mar 7 01:32:58.241002 sshd[5957]: pam_unix(sshd:session): session closed for user core
Mar 7 01:32:58.269325 systemd[1]: sshd@55-10.0.0.48:22-10.0.0.1:49458.service: Deactivated successfully.
Mar 7 01:32:58.316340 systemd[1]: session-56.scope: Deactivated successfully.
Mar 7 01:32:58.320251 systemd-logind[1464]: Session 56 logged out. Waiting for processes to exit.
Mar 7 01:32:58.332050 systemd-logind[1464]: Removed session 56.
Mar 7 01:33:03.487503 systemd[1]: Started sshd@56-10.0.0.48:22-10.0.0.1:59484.service - OpenSSH per-connection server daemon (10.0.0.1:59484).
Mar 7 01:33:03.798488 sshd[6008]: Accepted publickey for core from 10.0.0.1 port 59484 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:33:03.808754 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:33:03.856558 systemd-logind[1464]: New session 57 of user core.
Mar 7 01:33:03.866343 systemd[1]: Started session-57.scope - Session 57 of User core.
Mar 7 01:33:04.472152 sshd[6008]: pam_unix(sshd:session): session closed for user core
Mar 7 01:33:04.559216 systemd[1]: sshd@56-10.0.0.48:22-10.0.0.1:59484.service: Deactivated successfully.
Mar 7 01:33:04.590950 systemd[1]: session-57.scope: Deactivated successfully.
Mar 7 01:33:04.601574 systemd-logind[1464]: Session 57 logged out. Waiting for processes to exit.
Mar 7 01:33:04.605951 systemd-logind[1464]: Removed session 57.
Mar 7 01:33:04.805965 kubelet[2672]: E0307 01:33:04.800755 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:33:04.805965 kubelet[2672]: E0307 01:33:04.805532 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:33:09.560294 systemd[1]: Started sshd@57-10.0.0.48:22-10.0.0.1:59490.service - OpenSSH per-connection server daemon (10.0.0.1:59490).
Mar 7 01:33:09.720254 sshd[6043]: Accepted publickey for core from 10.0.0.1 port 59490 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:33:09.732299 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:33:09.761010 systemd-logind[1464]: New session 58 of user core.
Mar 7 01:33:09.792367 systemd[1]: Started session-58.scope - Session 58 of User core.
Mar 7 01:33:09.801447 kubelet[2672]: E0307 01:33:09.799034 2672 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:33:10.156987 sshd[6043]: pam_unix(sshd:session): session closed for user core
Mar 7 01:33:10.186434 systemd[1]: sshd@57-10.0.0.48:22-10.0.0.1:59490.service: Deactivated successfully.
Mar 7 01:33:10.194423 systemd[1]: session-58.scope: Deactivated successfully.
Mar 7 01:33:10.197923 systemd-logind[1464]: Session 58 logged out. Waiting for processes to exit.
Mar 7 01:33:10.204455 systemd-logind[1464]: Removed session 58.