Mar 10 00:57:30.467659 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 9 22:55:40 -00 2026
Mar 10 00:57:30.467683 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 00:57:30.467695 kernel: BIOS-provided physical RAM map:
Mar 10 00:57:30.467701 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 10 00:57:30.467707 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 10 00:57:30.467712 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 10 00:57:30.467719 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 10 00:57:30.467725 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 10 00:57:30.467731 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 10 00:57:30.467739 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 10 00:57:30.467745 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 10 00:57:30.467751 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 10 00:57:30.467784 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 10 00:57:30.467791 kernel: NX (Execute Disable) protection: active
Mar 10 00:57:30.467797 kernel: APIC: Static calls initialized
Mar 10 00:57:30.467830 kernel: SMBIOS 2.8 present.
Mar 10 00:57:30.467837 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 10 00:57:30.467843 kernel: Hypervisor detected: KVM
Mar 10 00:57:30.467849 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 10 00:57:30.467855 kernel: kvm-clock: using sched offset of 6808058623 cycles
Mar 10 00:57:30.467861 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 10 00:57:30.467868 kernel: tsc: Detected 2445.424 MHz processor
Mar 10 00:57:30.467874 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 10 00:57:30.467881 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 10 00:57:30.467890 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 10 00:57:30.467896 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 10 00:57:30.467903 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 10 00:57:30.467909 kernel: Using GB pages for direct mapping
Mar 10 00:57:30.467915 kernel: ACPI: Early table checksum verification disabled
Mar 10 00:57:30.467921 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 10 00:57:30.467928 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 00:57:30.467934 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 00:57:30.467940 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 00:57:30.467949 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 10 00:57:30.467955 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 00:57:30.467962 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 00:57:30.467968 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 00:57:30.467974 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 00:57:30.467980 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 10 00:57:30.467986 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 10 00:57:30.467997 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 10 00:57:30.468006 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 10 00:57:30.468012 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 10 00:57:30.468019 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 10 00:57:30.468025 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 10 00:57:30.468032 kernel: No NUMA configuration found
Mar 10 00:57:30.468038 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 10 00:57:30.468047 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 10 00:57:30.468054 kernel: Zone ranges:
Mar 10 00:57:30.468060 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 10 00:57:30.468067 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 10 00:57:30.468073 kernel: Normal empty
Mar 10 00:57:30.468079 kernel: Movable zone start for each node
Mar 10 00:57:30.468086 kernel: Early memory node ranges
Mar 10 00:57:30.468092 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 10 00:57:30.468099 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 10 00:57:30.468105 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 10 00:57:30.468114 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 00:57:30.468144 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 10 00:57:30.468151 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 10 00:57:30.468158 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 10 00:57:30.468164 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 10 00:57:30.468171 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 10 00:57:30.468177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 10 00:57:30.468184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 10 00:57:30.468190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 10 00:57:30.468200 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 10 00:57:30.468206 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 10 00:57:30.468213 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 10 00:57:30.468219 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 10 00:57:30.468226 kernel: TSC deadline timer available
Mar 10 00:57:30.468232 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 10 00:57:30.468238 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 10 00:57:30.468245 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 10 00:57:30.468272 kernel: kvm-guest: setup PV sched yield
Mar 10 00:57:30.468282 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 10 00:57:30.468289 kernel: Booting paravirtualized kernel on KVM
Mar 10 00:57:30.468295 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 10 00:57:30.468302 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 10 00:57:30.468308 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 10 00:57:30.468315 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 10 00:57:30.468321 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 10 00:57:30.468327 kernel: kvm-guest: PV spinlocks enabled
Mar 10 00:57:30.468334 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 10 00:57:30.468344 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 00:57:30.468351 kernel: random: crng init done
Mar 10 00:57:30.468357 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 00:57:30.468364 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 00:57:30.468370 kernel: Fallback order for Node 0: 0
Mar 10 00:57:30.468376 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 10 00:57:30.468383 kernel: Policy zone: DMA32
Mar 10 00:57:30.468389 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 00:57:30.468399 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 136884K reserved, 0K cma-reserved)
Mar 10 00:57:30.468405 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 10 00:57:30.468474 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 10 00:57:30.468482 kernel: ftrace: allocated 149 pages with 4 groups
Mar 10 00:57:30.468488 kernel: Dynamic Preempt: voluntary
Mar 10 00:57:30.468495 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 00:57:30.468507 kernel: rcu: RCU event tracing is enabled.
Mar 10 00:57:30.468514 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 10 00:57:30.468521 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 00:57:30.468531 kernel: Rude variant of Tasks RCU enabled.
Mar 10 00:57:30.468537 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 00:57:30.468544 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 00:57:30.468550 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 10 00:57:30.468580 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 10 00:57:30.468587 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 00:57:30.468593 kernel: Console: colour VGA+ 80x25
Mar 10 00:57:30.468599 kernel: printk: console [ttyS0] enabled
Mar 10 00:57:30.468635 kernel: ACPI: Core revision 20230628
Mar 10 00:57:30.468642 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 10 00:57:30.468652 kernel: APIC: Switch to symmetric I/O mode setup
Mar 10 00:57:30.468659 kernel: x2apic enabled
Mar 10 00:57:30.468665 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 10 00:57:30.468672 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 10 00:57:30.468678 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 10 00:57:30.468685 kernel: kvm-guest: setup PV IPIs
Mar 10 00:57:30.468691 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 10 00:57:30.468710 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 10 00:57:30.468717 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 10 00:57:30.468724 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 10 00:57:30.468731 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 10 00:57:30.468740 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 10 00:57:30.468747 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 10 00:57:30.468754 kernel: Spectre V2 : Mitigation: Retpolines
Mar 10 00:57:30.468761 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 10 00:57:30.468768 kernel: Speculative Store Bypass: Vulnerable
Mar 10 00:57:30.468777 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 10 00:57:30.468808 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 10 00:57:30.468816 kernel: active return thunk: srso_alias_return_thunk
Mar 10 00:57:30.468823 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 10 00:57:30.468829 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 10 00:57:30.468836 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 10 00:57:30.468843 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 10 00:57:30.468850 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 10 00:57:30.468860 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 10 00:57:30.468867 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 10 00:57:30.468874 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 10 00:57:30.468880 kernel: Freeing SMP alternatives memory: 32K
Mar 10 00:57:30.468887 kernel: pid_max: default: 32768 minimum: 301
Mar 10 00:57:30.468894 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 10 00:57:30.468900 kernel: landlock: Up and running.
Mar 10 00:57:30.468907 kernel: SELinux: Initializing.
Mar 10 00:57:30.468914 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 00:57:30.468924 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 00:57:30.468931 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 10 00:57:30.468937 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 00:57:30.468944 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 00:57:30.468951 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 00:57:30.468958 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 10 00:57:30.468964 kernel: signal: max sigframe size: 1776
Mar 10 00:57:30.468993 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 00:57:30.469000 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 00:57:30.469010 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 10 00:57:30.469017 kernel: smp: Bringing up secondary CPUs ...
Mar 10 00:57:30.469024 kernel: smpboot: x86: Booting SMP configuration:
Mar 10 00:57:30.469030 kernel: .... node #0, CPUs: #1 #2 #3
Mar 10 00:57:30.469037 kernel: smp: Brought up 1 node, 4 CPUs
Mar 10 00:57:30.469044 kernel: smpboot: Max logical packages: 1
Mar 10 00:57:30.469050 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 10 00:57:30.469057 kernel: devtmpfs: initialized
Mar 10 00:57:30.469064 kernel: x86/mm: Memory block size: 128MB
Mar 10 00:57:30.469074 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 00:57:30.469080 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 10 00:57:30.469087 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 00:57:30.469094 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 00:57:30.469100 kernel: audit: initializing netlink subsys (disabled)
Mar 10 00:57:30.469107 kernel: audit: type=2000 audit(1773104247.870:1): state=initialized audit_enabled=0 res=1
Mar 10 00:57:30.469114 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 00:57:30.469121 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 10 00:57:30.469127 kernel: cpuidle: using governor menu
Mar 10 00:57:30.469137 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 00:57:30.469144 kernel: dca service started, version 1.12.1
Mar 10 00:57:30.469150 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 10 00:57:30.469157 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 10 00:57:30.469164 kernel: PCI: Using configuration type 1 for base access
Mar 10 00:57:30.469171 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 10 00:57:30.469178 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 00:57:30.469184 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 00:57:30.469191 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 00:57:30.469201 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 00:57:30.469207 kernel: ACPI: Added _OSI(Module Device)
Mar 10 00:57:30.469214 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 00:57:30.469221 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 00:57:30.469228 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 00:57:30.469234 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 10 00:57:30.469241 kernel: ACPI: Interpreter enabled
Mar 10 00:57:30.469248 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 10 00:57:30.469254 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 10 00:57:30.469264 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 10 00:57:30.469271 kernel: PCI: Using E820 reservations for host bridge windows
Mar 10 00:57:30.469277 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 10 00:57:30.469284 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 10 00:57:30.469822 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 10 00:57:30.470024 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 10 00:57:30.470180 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 10 00:57:30.470195 kernel: PCI host bridge to bus 0000:00
Mar 10 00:57:30.470482 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 10 00:57:30.470666 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 10 00:57:30.470806 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 10 00:57:30.470941 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 10 00:57:30.471152 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 10 00:57:30.471295 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 10 00:57:30.471507 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 10 00:57:30.471833 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 10 00:57:30.472067 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 10 00:57:30.472219 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 10 00:57:30.472366 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 10 00:57:30.472681 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 10 00:57:30.472834 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 10 00:57:30.473082 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 10 00:57:30.473234 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 10 00:57:30.473380 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 10 00:57:30.473591 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 10 00:57:30.473819 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 10 00:57:30.473968 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 10 00:57:30.474114 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 10 00:57:30.474267 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 10 00:57:30.474548 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 10 00:57:30.474743 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 10 00:57:30.474891 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 10 00:57:30.475034 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 10 00:57:30.475178 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 10 00:57:30.475362 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 10 00:57:30.475582 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 10 00:57:30.475840 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 10 00:57:30.475991 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 10 00:57:30.476136 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 10 00:57:30.476381 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 10 00:57:30.476594 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 10 00:57:30.476651 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 10 00:57:30.476658 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 10 00:57:30.476665 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 10 00:57:30.476672 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 10 00:57:30.476679 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 10 00:57:30.476686 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 10 00:57:30.476693 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 10 00:57:30.476699 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 10 00:57:30.476706 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 10 00:57:30.476717 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 10 00:57:30.476723 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 10 00:57:30.476730 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 10 00:57:30.476737 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 10 00:57:30.476744 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 10 00:57:30.476750 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 10 00:57:30.476757 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 10 00:57:30.476764 kernel: iommu: Default domain type: Translated
Mar 10 00:57:30.476771 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 10 00:57:30.476780 kernel: PCI: Using ACPI for IRQ routing
Mar 10 00:57:30.476787 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 10 00:57:30.476794 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 10 00:57:30.476801 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 10 00:57:30.476953 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 10 00:57:30.477106 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 10 00:57:30.477251 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 10 00:57:30.477261 kernel: vgaarb: loaded
Mar 10 00:57:30.477273 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 10 00:57:30.477280 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 10 00:57:30.477286 kernel: clocksource: Switched to clocksource kvm-clock
Mar 10 00:57:30.477293 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 00:57:30.477300 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 00:57:30.477306 kernel: pnp: PnP ACPI init
Mar 10 00:57:30.477682 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 10 00:57:30.477695 kernel: pnp: PnP ACPI: found 6 devices
Mar 10 00:57:30.477707 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 10 00:57:30.477715 kernel: NET: Registered PF_INET protocol family
Mar 10 00:57:30.477722 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 00:57:30.477728 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 00:57:30.477735 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 00:57:30.477742 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 00:57:30.477749 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 00:57:30.477756 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 00:57:30.477763 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 00:57:30.477773 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 00:57:30.477780 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 00:57:30.477786 kernel: NET: Registered PF_XDP protocol family
Mar 10 00:57:30.477959 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 10 00:57:30.478096 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 10 00:57:30.478230 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 10 00:57:30.478365 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 10 00:57:30.478567 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 10 00:57:30.478769 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 10 00:57:30.478786 kernel: PCI: CLS 0 bytes, default 64
Mar 10 00:57:30.478793 kernel: Initialise system trusted keyrings
Mar 10 00:57:30.478799 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 00:57:30.478806 kernel: Key type asymmetric registered
Mar 10 00:57:30.478813 kernel: Asymmetric key parser 'x509' registered
Mar 10 00:57:30.478820 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 10 00:57:30.478827 kernel: io scheduler mq-deadline registered
Mar 10 00:57:30.478833 kernel: io scheduler kyber registered
Mar 10 00:57:30.478840 kernel: io scheduler bfq registered
Mar 10 00:57:30.478850 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 10 00:57:30.478858 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 10 00:57:30.478865 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 10 00:57:30.478872 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 10 00:57:30.478878 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 00:57:30.478885 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 10 00:57:30.478892 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 10 00:57:30.478899 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 10 00:57:30.478906 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 10 00:57:30.479139 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 10 00:57:30.479151 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 10 00:57:30.479293 kernel: rtc_cmos 00:04: registered as rtc0
Mar 10 00:57:30.479727 kernel: rtc_cmos 00:04: setting system clock to 2026-03-10T00:57:29 UTC (1773104249)
Mar 10 00:57:30.479875 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 10 00:57:30.479885 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 10 00:57:30.479892 kernel: NET: Registered PF_INET6 protocol family
Mar 10 00:57:30.479904 kernel: Segment Routing with IPv6
Mar 10 00:57:30.479911 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 00:57:30.479918 kernel: NET: Registered PF_PACKET protocol family
Mar 10 00:57:30.479924 kernel: Key type dns_resolver registered
Mar 10 00:57:30.479931 kernel: IPI shorthand broadcast: enabled
Mar 10 00:57:30.479938 kernel: sched_clock: Marking stable (2335026889, 654644704)->(3217032626, -227361033)
Mar 10 00:57:30.479945 kernel: registered taskstats version 1
Mar 10 00:57:30.479952 kernel: Loading compiled-in X.509 certificates
Mar 10 00:57:30.479959 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 611e035accba842cc9fafb5ced2ca41a603067aa'
Mar 10 00:57:30.479966 kernel: Key type .fscrypt registered
Mar 10 00:57:30.479975 kernel: Key type fscrypt-provisioning registered
Mar 10 00:57:30.479982 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 10 00:57:30.479989 kernel: ima: Allocated hash algorithm: sha1
Mar 10 00:57:30.479996 kernel: ima: No architecture policies found
Mar 10 00:57:30.480002 kernel: clk: Disabling unused clocks
Mar 10 00:57:30.480009 kernel: Freeing unused kernel image (initmem) memory: 42896K
Mar 10 00:57:30.480016 kernel: Write protecting the kernel read-only data: 36864k
Mar 10 00:57:30.480023 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 10 00:57:30.480032 kernel: Run /init as init process
Mar 10 00:57:30.480039 kernel: with arguments:
Mar 10 00:57:30.480046 kernel: /init
Mar 10 00:57:30.480052 kernel: with environment:
Mar 10 00:57:30.480059 kernel: HOME=/
Mar 10 00:57:30.480066 kernel: TERM=linux
Mar 10 00:57:30.480074 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 00:57:30.480083 systemd[1]: Detected virtualization kvm.
Mar 10 00:57:30.480094 systemd[1]: Detected architecture x86-64.
Mar 10 00:57:30.480101 systemd[1]: Running in initrd.
Mar 10 00:57:30.480108 systemd[1]: No hostname configured, using default hostname.
Mar 10 00:57:30.480115 systemd[1]: Hostname set to .
Mar 10 00:57:30.480123 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 00:57:30.480130 systemd[1]: Queued start job for default target initrd.target.
Mar 10 00:57:30.480137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 00:57:30.480144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 00:57:30.480155 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 10 00:57:30.480162 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 00:57:30.480170 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 10 00:57:30.480177 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 10 00:57:30.480185 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 10 00:57:30.480193 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 10 00:57:30.480200 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 00:57:30.480210 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 00:57:30.480218 systemd[1]: Reached target paths.target - Path Units.
Mar 10 00:57:30.480225 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 00:57:30.480232 systemd[1]: Reached target swap.target - Swaps.
Mar 10 00:57:30.480254 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 00:57:30.480264 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 00:57:30.480275 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 00:57:30.480282 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 00:57:30.480290 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 10 00:57:30.480297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 00:57:30.480305 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 00:57:30.480312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 00:57:30.480319 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 00:57:30.480327 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 10 00:57:30.480334 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 00:57:30.480345 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 10 00:57:30.480352 systemd[1]: Starting systemd-fsck-usr.service...
Mar 10 00:57:30.480360 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 00:57:30.480367 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 00:57:30.480374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 00:57:30.480403 systemd-journald[194]: Collecting audit messages is disabled.
Mar 10 00:57:30.480498 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 10 00:57:30.480506 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 00:57:30.480514 systemd-journald[194]: Journal started
Mar 10 00:57:30.480533 systemd-journald[194]: Runtime Journal (/run/log/journal/b7ff6b560a134be5ba76d64455c6e4ae) is 6.0M, max 48.4M, 42.3M free.
Mar 10 00:57:30.488711 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 00:57:30.496300 systemd[1]: Finished systemd-fsck-usr.service.
Mar 10 00:57:30.503381 systemd-modules-load[195]: Inserted module 'overlay'
Mar 10 00:57:30.508719 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 00:57:30.513015 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 00:57:30.522757 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 10 00:57:30.524166 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 00:57:30.557036 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 10 00:57:30.557091 kernel: Bridge firewalling registered
Mar 10 00:57:30.556250 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 10 00:57:30.557592 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 00:57:30.746727 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 00:57:30.747579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 00:57:30.780874 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 00:57:30.785197 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 00:57:30.796308 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 00:57:30.804123 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 00:57:30.806843 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 00:57:30.837252 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 00:57:30.855714 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 10 00:57:30.864120 systemd-resolved[222]: Positive Trust Anchors:
Mar 10 00:57:30.864163 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 00:57:30.864215 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 00:57:30.907219 dracut-cmdline[233]: dracut-dracut-053
Mar 10 00:57:30.907219 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 00:57:30.867165 systemd-resolved[222]: Defaulting to hostname 'linux'.
Mar 10 00:57:30.869190 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 00:57:30.876896 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 00:57:30.975532 kernel: SCSI subsystem initialized
Mar 10 00:57:30.987559 kernel: Loading iSCSI transport class v2.0-870.
Mar 10 00:57:31.001528 kernel: iscsi: registered transport (tcp)
Mar 10 00:57:31.028179 kernel: iscsi: registered transport (qla4xxx)
Mar 10 00:57:31.028289 kernel: QLogic iSCSI HBA Driver
Mar 10 00:57:31.094603 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 10 00:57:31.112730 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 10 00:57:31.147074 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 10 00:57:31.147130 kernel: device-mapper: uevent: version 1.0.3 Mar 10 00:57:31.150862 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 10 00:57:31.201572 kernel: raid6: avx2x4 gen() 26540 MB/s Mar 10 00:57:31.219518 kernel: raid6: avx2x2 gen() 27675 MB/s Mar 10 00:57:31.239900 kernel: raid6: avx2x1 gen() 23354 MB/s Mar 10 00:57:31.239942 kernel: raid6: using algorithm avx2x2 gen() 27675 MB/s Mar 10 00:57:31.262508 kernel: raid6: .... xor() 17222 MB/s, rmw enabled Mar 10 00:57:31.262567 kernel: raid6: using avx2x2 recovery algorithm Mar 10 00:57:31.294541 kernel: xor: automatically using best checksumming function avx Mar 10 00:57:31.467533 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 10 00:57:31.484197 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 10 00:57:31.502761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 00:57:31.516903 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 10 00:57:31.522987 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 00:57:31.527102 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 10 00:57:31.567169 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Mar 10 00:57:31.652802 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 10 00:57:31.674188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 10 00:57:31.927240 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 00:57:31.947833 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 10 00:57:32.009123 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Mar 10 00:57:32.014682 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 10 00:57:32.020489 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 00:57:32.025865 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 10 00:57:32.047579 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 10 00:57:32.059591 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 10 00:57:32.064216 kernel: cryptd: max_cpu_qlen set to 1000 Mar 10 00:57:32.068244 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 10 00:57:32.082079 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 10 00:57:32.082309 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 10 00:57:32.082325 kernel: GPT:9289727 != 19775487 Mar 10 00:57:32.082336 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 10 00:57:32.082346 kernel: GPT:9289727 != 19775487 Mar 10 00:57:32.082356 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 10 00:57:32.068518 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 00:57:32.086890 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 00:57:32.102082 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 00:57:32.111980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 00:57:32.112320 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 00:57:32.117773 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 00:57:32.152878 kernel: BTRFS: device fsid a7ce059b-f34b-4785-93b9-44632d452486 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (464) Mar 10 00:57:32.161097 kernel: libata version 3.00 loaded. 
Mar 10 00:57:32.156945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 00:57:32.166557 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 10 00:57:32.204515 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (480) Mar 10 00:57:32.204570 kernel: AVX2 version of gcm_enc/dec engaged. Mar 10 00:57:32.216396 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 10 00:57:32.226406 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 10 00:57:32.234995 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 10 00:57:32.463889 kernel: AES CTR mode by8 optimization enabled Mar 10 00:57:32.463924 kernel: ahci 0000:00:1f.2: version 3.0 Mar 10 00:57:32.464244 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 10 00:57:32.464264 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 10 00:57:32.464663 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 10 00:57:32.464929 kernel: scsi host0: ahci Mar 10 00:57:32.465165 kernel: scsi host1: ahci Mar 10 00:57:32.465553 kernel: scsi host2: ahci Mar 10 00:57:32.465845 kernel: scsi host3: ahci Mar 10 00:57:32.466082 kernel: scsi host4: ahci Mar 10 00:57:32.466318 kernel: scsi host5: ahci Mar 10 00:57:32.466693 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 10 00:57:32.466717 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 10 00:57:32.466732 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 10 00:57:32.466747 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 10 00:57:32.466767 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 10 00:57:32.466782 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 
Mar 10 00:57:32.472194 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 10 00:57:32.473088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 00:57:32.496135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 10 00:57:32.514972 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 10 00:57:32.524733 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 00:57:32.538688 disk-uuid[557]: Primary Header is updated. Mar 10 00:57:32.538688 disk-uuid[557]: Secondary Entries is updated. Mar 10 00:57:32.538688 disk-uuid[557]: Secondary Header is updated. Mar 10 00:57:32.550063 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 00:57:32.580490 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 10 00:57:32.580546 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 10 00:57:32.585777 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 10 00:57:32.592522 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 10 00:57:32.603572 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 10 00:57:32.609724 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 10 00:57:32.626672 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 10 00:57:32.626707 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 10 00:57:32.626736 kernel: ata3.00: applying bridge limits Mar 10 00:57:32.643514 kernel: ata3.00: configured for UDMA/100 Mar 10 00:57:32.661551 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 10 00:57:32.833254 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 10 00:57:32.833873 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 10 00:57:32.882153 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 10 00:57:33.568539 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 00:57:33.570216 disk-uuid[560]: The operation has completed successfully. Mar 10 00:57:33.618598 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 10 00:57:33.618825 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 10 00:57:33.646843 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 10 00:57:33.659808 sh[596]: Success Mar 10 00:57:33.676693 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 10 00:57:33.740378 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 10 00:57:33.765930 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 10 00:57:33.773014 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 10 00:57:33.794721 kernel: BTRFS info (device dm-0): first mount of filesystem a7ce059b-f34b-4785-93b9-44632d452486 Mar 10 00:57:33.794765 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 10 00:57:33.794785 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 10 00:57:33.798607 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 10 00:57:33.801561 kernel: BTRFS info (device dm-0): using free space tree Mar 10 00:57:33.815541 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 10 00:57:33.816593 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 10 00:57:33.832788 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 10 00:57:33.837565 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 10 00:57:33.862803 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 00:57:33.862846 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 10 00:57:33.862864 kernel: BTRFS info (device vda6): using free space tree Mar 10 00:57:33.870521 kernel: BTRFS info (device vda6): auto enabling async discard Mar 10 00:57:33.884361 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 10 00:57:33.891146 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 00:57:33.902605 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 10 00:57:33.916780 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 10 00:57:33.991024 ignition[698]: Ignition 2.19.0 Mar 10 00:57:33.991059 ignition[698]: Stage: fetch-offline Mar 10 00:57:33.991101 ignition[698]: no configs at "/usr/lib/ignition/base.d" Mar 10 00:57:33.991113 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 00:57:33.991234 ignition[698]: parsed url from cmdline: "" Mar 10 00:57:33.991239 ignition[698]: no config URL provided Mar 10 00:57:33.991245 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Mar 10 00:57:33.991257 ignition[698]: no config at "/usr/lib/ignition/user.ign" Mar 10 00:57:33.991287 ignition[698]: op(1): [started] loading QEMU firmware config module Mar 10 00:57:33.991292 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 10 00:57:34.000894 ignition[698]: op(1): [finished] loading QEMU firmware config module Mar 10 00:57:34.057794 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 00:57:34.075693 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 10 00:57:34.096254 ignition[698]: parsing config with SHA512: 49a4bd6c31455fcae6e647869f0d310812bab63bad5f8852e5cad7083bf0dc747a65d82c173a3de7e781881a2bf42c14100c809f6db674fe0285a873692d5022 Mar 10 00:57:34.104827 unknown[698]: fetched base config from "system" Mar 10 00:57:34.104879 unknown[698]: fetched user config from "qemu" Mar 10 00:57:34.108551 systemd-networkd[784]: lo: Link UP Mar 10 00:57:34.108556 systemd-networkd[784]: lo: Gained carrier Mar 10 00:57:34.114163 ignition[698]: fetch-offline: fetch-offline passed Mar 10 00:57:34.110852 systemd-networkd[784]: Enumeration completed Mar 10 00:57:34.114285 ignition[698]: Ignition finished successfully Mar 10 00:57:34.112133 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 10 00:57:34.112138 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 00:57:34.113566 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 00:57:34.113915 systemd-networkd[784]: eth0: Link UP Mar 10 00:57:34.113920 systemd-networkd[784]: eth0: Gained carrier Mar 10 00:57:34.113929 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 00:57:34.123949 systemd[1]: Reached target network.target - Network. Mar 10 00:57:34.165201 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 10 00:57:34.165666 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 10 00:57:34.180517 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 10 00:57:34.187716 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 10 00:57:34.207497 ignition[787]: Ignition 2.19.0 Mar 10 00:57:34.207534 ignition[787]: Stage: kargs Mar 10 00:57:34.207798 ignition[787]: no configs at "/usr/lib/ignition/base.d" Mar 10 00:57:34.211066 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 10 00:57:34.207812 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 00:57:34.208710 ignition[787]: kargs: kargs passed Mar 10 00:57:34.208757 ignition[787]: Ignition finished successfully Mar 10 00:57:34.235688 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 10 00:57:34.253098 ignition[796]: Ignition 2.19.0 Mar 10 00:57:34.253158 ignition[796]: Stage: disks Mar 10 00:57:34.253653 ignition[796]: no configs at "/usr/lib/ignition/base.d" Mar 10 00:57:34.256946 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Mar 10 00:57:34.253669 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 00:57:34.261929 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 10 00:57:34.254862 ignition[796]: disks: disks passed Mar 10 00:57:34.269691 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 10 00:57:34.254912 ignition[796]: Ignition finished successfully Mar 10 00:57:34.274698 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 10 00:57:34.279139 systemd[1]: Reached target sysinit.target - System Initialization. Mar 10 00:57:34.283597 systemd[1]: Reached target basic.target - Basic System. Mar 10 00:57:34.331915 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 10 00:57:34.358148 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 10 00:57:34.365327 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 10 00:57:34.373261 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 10 00:57:34.516599 kernel: EXT4-fs (vda9): mounted filesystem 8ab7565f-94b4-4514-a19e-abd5bcc78da1 r/w with ordered data mode. Quota mode: none. Mar 10 00:57:34.517820 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 10 00:57:34.518873 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 10 00:57:34.542613 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 10 00:57:34.566062 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Mar 10 00:57:34.566092 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 00:57:34.566119 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 10 00:57:34.566129 kernel: BTRFS info (device vda6): using free space tree Mar 10 00:57:34.547129 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 10 00:57:34.569390 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 10 00:57:34.588754 kernel: BTRFS info (device vda6): auto enabling async discard Mar 10 00:57:34.569564 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 10 00:57:34.569605 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 10 00:57:34.605236 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 10 00:57:34.612104 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 10 00:57:34.631726 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 10 00:57:34.680746 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Mar 10 00:57:34.693827 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Mar 10 00:57:34.707049 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Mar 10 00:57:34.721771 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Mar 10 00:57:34.876880 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 10 00:57:34.900677 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 10 00:57:34.909149 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 10 00:57:34.920161 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 00:57:34.918335 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 10 00:57:34.946980 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 10 00:57:34.958843 ignition[927]: INFO : Ignition 2.19.0 Mar 10 00:57:34.958843 ignition[927]: INFO : Stage: mount Mar 10 00:57:34.964780 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 00:57:34.964780 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 00:57:34.964780 ignition[927]: INFO : mount: mount passed Mar 10 00:57:34.964780 ignition[927]: INFO : Ignition finished successfully Mar 10 00:57:34.962385 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 10 00:57:34.988808 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 10 00:57:35.002294 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 10 00:57:35.026514 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Mar 10 00:57:35.033695 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124 Mar 10 00:57:35.033733 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 10 00:57:35.033745 kernel: BTRFS info (device vda6): using free space tree Mar 10 00:57:35.045571 kernel: BTRFS info (device vda6): auto enabling async discard Mar 10 00:57:35.047106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 10 00:57:35.091889 ignition[957]: INFO : Ignition 2.19.0 Mar 10 00:57:35.091889 ignition[957]: INFO : Stage: files Mar 10 00:57:35.098488 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 00:57:35.098488 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 00:57:35.098488 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 10 00:57:35.098488 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 10 00:57:35.098488 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 10 00:57:35.124057 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 10 00:57:35.124057 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 10 00:57:35.124057 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 10 00:57:35.124057 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 10 00:57:35.124057 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 10 00:57:35.101954 unknown[957]: wrote ssh authorized keys file for user: core Mar 10 00:57:35.181063 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 10 00:57:35.267705 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 10 00:57:35.267705 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 10 
00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 10 00:57:35.289599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 10 00:57:35.628119 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 10 00:57:35.744381 systemd-networkd[784]: eth0: Gained IPv6LL Mar 10 00:57:36.176370 kernel: hrtimer: interrupt took 4511395 ns Mar 10 00:57:38.243405 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 10 00:57:38.243405 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 10 00:57:38.259962 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 10 00:57:38.269238 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 10 00:57:38.269238 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 10 00:57:38.269238 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 10 00:57:38.286868 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 10 00:57:38.298852 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 10 00:57:38.298852 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 10 00:57:38.312136 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 10 00:57:38.525515 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 10 00:57:38.610403 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement 
symlink(s) for "coreos-metadata.service" Mar 10 00:57:38.619545 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 10 00:57:38.619545 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 10 00:57:38.634808 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 10 00:57:38.644076 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 10 00:57:38.653381 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 10 00:57:38.661711 ignition[957]: INFO : files: files passed Mar 10 00:57:38.665753 ignition[957]: INFO : Ignition finished successfully Mar 10 00:57:38.675924 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 10 00:57:38.712048 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 10 00:57:38.722615 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 10 00:57:38.740740 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 10 00:57:38.741187 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 10 00:57:38.755798 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Mar 10 00:57:38.767715 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 10 00:57:38.773783 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 10 00:57:38.773783 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 10 00:57:38.774809 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Mar 10 00:57:38.780488 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 10 00:57:38.818964 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 10 00:57:38.875144 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 10 00:57:38.875350 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 10 00:57:38.885272 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 10 00:57:38.896795 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 10 00:57:38.911486 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 10 00:57:38.926752 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 10 00:57:38.948132 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 10 00:57:38.966866 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 10 00:57:38.979997 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 10 00:57:38.984330 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 00:57:38.995022 systemd[1]: Stopped target timers.target - Timer Units. Mar 10 00:57:39.004984 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 10 00:57:39.005167 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 10 00:57:39.013742 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 10 00:57:39.019889 systemd[1]: Stopped target basic.target - Basic System. Mar 10 00:57:39.027852 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 10 00:57:39.035570 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 10 00:57:39.036090 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Mar 10 00:57:39.037227 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 10 00:57:39.038701 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 10 00:57:39.039076 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 10 00:57:39.040270 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 10 00:57:39.041492 systemd[1]: Stopped target swap.target - Swaps. Mar 10 00:57:39.041949 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 10 00:57:39.042289 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 10 00:57:39.043130 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 10 00:57:39.044306 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 00:57:39.181697 ignition[1011]: INFO : Ignition 2.19.0 Mar 10 00:57:39.181697 ignition[1011]: INFO : Stage: umount Mar 10 00:57:39.181697 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 00:57:39.181697 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 00:57:39.181697 ignition[1011]: INFO : umount: umount passed Mar 10 00:57:39.181697 ignition[1011]: INFO : Ignition finished successfully Mar 10 00:57:39.045497 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 10 00:57:39.045829 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 00:57:39.046040 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 10 00:57:39.046315 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 10 00:57:39.048308 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 10 00:57:39.048612 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 10 00:57:39.049184 systemd[1]: Stopped target paths.target - Path Units. 
Mar 10 00:57:39.049675 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 10 00:57:39.050026 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 00:57:39.051539 systemd[1]: Stopped target slices.target - Slice Units. Mar 10 00:57:39.052039 systemd[1]: Stopped target sockets.target - Socket Units. Mar 10 00:57:39.052676 systemd[1]: iscsid.socket: Deactivated successfully. Mar 10 00:57:39.052932 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 10 00:57:39.054392 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 10 00:57:39.054709 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 10 00:57:39.055600 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 10 00:57:39.055900 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 10 00:57:39.056790 systemd[1]: ignition-files.service: Deactivated successfully. Mar 10 00:57:39.056995 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 10 00:57:39.137144 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 10 00:57:39.143990 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 10 00:57:39.144277 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 00:57:39.156827 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 10 00:57:39.162132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 10 00:57:39.162337 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 00:57:39.181705 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 10 00:57:39.181916 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 10 00:57:39.239161 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 10 00:57:39.240768 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 10 00:57:39.240969 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 10 00:57:39.249074 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 10 00:57:39.249286 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 10 00:57:39.256698 systemd[1]: Stopped target network.target - Network. Mar 10 00:57:39.266107 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 10 00:57:39.266265 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 10 00:57:39.280811 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 10 00:57:39.280943 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 10 00:57:39.281704 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 10 00:57:39.281786 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 10 00:57:39.282541 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 10 00:57:39.282621 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 10 00:57:39.284106 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 10 00:57:39.284978 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 10 00:57:39.318052 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 10 00:57:39.319357 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 10 00:57:39.322855 systemd-networkd[784]: eth0: DHCPv6 lease lost Mar 10 00:57:39.330014 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 10 00:57:39.330199 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 10 00:57:39.336535 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 10 00:57:39.336940 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Mar 10 00:57:39.347779 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 10 00:57:39.347862 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 10 00:57:39.365774 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 10 00:57:39.365892 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 10 00:57:39.399770 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 10 00:57:39.410869 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 10 00:57:39.410968 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 00:57:39.418034 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 10 00:57:39.418132 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 10 00:57:39.422623 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 10 00:57:39.422756 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 10 00:57:39.432775 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 10 00:57:39.432880 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 00:57:39.441296 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 00:57:39.491225 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 10 00:57:39.493241 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 00:57:39.526569 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 10 00:57:39.528523 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 10 00:57:39.549506 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 10 00:57:39.549570 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 10 00:57:39.561702 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 10 00:57:39.562333 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 10 00:57:39.578869 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 10 00:57:39.579006 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 10 00:57:39.610763 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 10 00:57:39.611080 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 00:57:39.671997 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 10 00:57:39.681071 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 10 00:57:39.681233 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 10 00:57:39.691216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 00:57:39.691300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 00:57:39.696234 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 10 00:57:39.696485 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 10 00:57:39.886849 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Mar 10 00:57:39.727518 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 10 00:57:39.727848 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 10 00:57:39.742530 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 10 00:57:39.770164 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 10 00:57:39.805175 systemd[1]: Switching root. 
Mar 10 00:57:39.955313 systemd-journald[194]: Journal stopped Mar 10 00:57:43.157851 kernel: SELinux: policy capability network_peer_controls=1 Mar 10 00:57:43.157994 kernel: SELinux: policy capability open_perms=1 Mar 10 00:57:43.158008 kernel: SELinux: policy capability extended_socket_class=1 Mar 10 00:57:43.158020 kernel: SELinux: policy capability always_check_network=0 Mar 10 00:57:43.158031 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 10 00:57:43.158051 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 10 00:57:43.158063 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 10 00:57:43.158077 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 10 00:57:43.158088 kernel: audit: type=1403 audit(1773104260.324:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 10 00:57:43.158101 systemd[1]: Successfully loaded SELinux policy in 155.149ms. Mar 10 00:57:43.158122 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 48.275ms. Mar 10 00:57:43.158135 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 10 00:57:43.158147 systemd[1]: Detected virtualization kvm. Mar 10 00:57:43.158161 systemd[1]: Detected architecture x86-64. Mar 10 00:57:43.158174 systemd[1]: Detected first boot. Mar 10 00:57:43.158185 systemd[1]: Initializing machine ID from VM UUID. Mar 10 00:57:43.158295 zram_generator::config[1059]: No configuration found. Mar 10 00:57:43.158311 systemd[1]: Populated /etc with preset unit settings. Mar 10 00:57:43.158323 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 10 00:57:43.158334 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Mar 10 00:57:43.158346 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 10 00:57:43.158363 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 10 00:57:43.158375 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 10 00:57:43.158387 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 10 00:57:43.158399 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 10 00:57:43.158466 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 10 00:57:43.158480 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 10 00:57:43.158499 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 10 00:57:43.158550 systemd[1]: Created slice user.slice - User and Session Slice. Mar 10 00:57:43.158572 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 00:57:43.158597 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 00:57:43.158618 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 10 00:57:43.158700 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 10 00:57:43.158727 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 10 00:57:43.158749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 10 00:57:43.158770 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 10 00:57:43.159399 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 00:57:43.159473 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Mar 10 00:57:43.159487 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 10 00:57:43.159504 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 10 00:57:43.159517 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 10 00:57:43.159529 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 00:57:43.159541 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 10 00:57:43.159552 systemd[1]: Reached target slices.target - Slice Units. Mar 10 00:57:43.159571 systemd[1]: Reached target swap.target - Swaps. Mar 10 00:57:43.159591 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 10 00:57:43.159613 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 10 00:57:43.159708 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 10 00:57:43.159724 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 10 00:57:43.159902 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 10 00:57:43.159915 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 10 00:57:43.159928 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 10 00:57:43.159939 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 10 00:57:43.159952 systemd[1]: Mounting media.mount - External Media Directory... Mar 10 00:57:43.159964 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 00:57:43.160027 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 10 00:57:43.160044 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 10 00:57:43.160056 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 10 00:57:43.160068 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 10 00:57:43.160080 systemd[1]: Reached target machines.target - Containers. Mar 10 00:57:43.160091 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 10 00:57:43.160103 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 00:57:43.160116 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 10 00:57:43.160127 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 10 00:57:43.160142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 00:57:43.160154 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 10 00:57:43.160165 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 00:57:43.160203 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 10 00:57:43.160215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 00:57:43.160227 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 10 00:57:43.160264 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 10 00:57:43.160277 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 10 00:57:43.160293 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 10 00:57:43.160305 systemd[1]: Stopped systemd-fsck-usr.service. Mar 10 00:57:43.160316 kernel: fuse: init (API version 7.39) Mar 10 00:57:43.160328 kernel: ACPI: bus type drm_connector registered Mar 10 00:57:43.160339 systemd[1]: Starting systemd-journald.service - Journal Service... 
Mar 10 00:57:43.160351 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 10 00:57:43.160362 kernel: loop: module loaded Mar 10 00:57:43.160373 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 10 00:57:43.160385 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 10 00:57:43.160396 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 10 00:57:43.160503 systemd-journald[1143]: Collecting audit messages is disabled. Mar 10 00:57:43.160562 systemd[1]: verity-setup.service: Deactivated successfully. Mar 10 00:57:43.160576 systemd-journald[1143]: Journal started Mar 10 00:57:43.160595 systemd-journald[1143]: Runtime Journal (/run/log/journal/b7ff6b560a134be5ba76d64455c6e4ae) is 6.0M, max 48.4M, 42.3M free. Mar 10 00:57:42.379089 systemd[1]: Queued start job for default target multi-user.target. Mar 10 00:57:42.409614 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 10 00:57:42.410368 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 10 00:57:42.411011 systemd[1]: systemd-journald.service: Consumed 2.343s CPU time. Mar 10 00:57:43.167797 systemd[1]: Stopped verity-setup.service. Mar 10 00:57:43.180515 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 00:57:43.186531 systemd[1]: Started systemd-journald.service - Journal Service. Mar 10 00:57:43.193375 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 10 00:57:43.199023 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 10 00:57:43.203717 systemd[1]: Mounted media.mount - External Media Directory. Mar 10 00:57:43.207908 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Mar 10 00:57:43.212101 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 10 00:57:43.216227 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 10 00:57:43.220201 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 10 00:57:43.227970 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 00:57:43.233335 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 10 00:57:43.233822 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 10 00:57:43.241072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 00:57:43.241594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 00:57:43.249268 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 10 00:57:43.249754 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 10 00:57:43.255994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 00:57:43.256340 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 00:57:43.261785 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 10 00:57:43.262193 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 10 00:57:43.275204 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 00:57:43.275567 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 00:57:43.282821 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 10 00:57:43.296205 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 10 00:57:43.304784 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 10 00:57:43.333046 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Mar 10 00:57:43.348593 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 10 00:57:43.358145 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 10 00:57:43.362591 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 10 00:57:43.362755 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 10 00:57:43.369606 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 10 00:57:43.394815 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 10 00:57:43.402194 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 10 00:57:43.406833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 00:57:43.409698 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 10 00:57:43.418588 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 10 00:57:43.425171 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 00:57:43.436988 systemd-journald[1143]: Time spent on flushing to /var/log/journal/b7ff6b560a134be5ba76d64455c6e4ae is 289.909ms for 936 entries. Mar 10 00:57:43.436988 systemd-journald[1143]: System Journal (/var/log/journal/b7ff6b560a134be5ba76d64455c6e4ae) is 8.0M, max 195.6M, 187.6M free. Mar 10 00:57:43.767189 systemd-journald[1143]: Received client request to flush runtime journal. Mar 10 00:57:43.767299 kernel: loop0: detected capacity change from 0 to 140768 Mar 10 00:57:43.436930 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Mar 10 00:57:43.445974 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 00:57:43.452692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 10 00:57:43.472751 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 10 00:57:43.743912 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 10 00:57:43.752836 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 00:57:43.758910 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 10 00:57:43.771332 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 10 00:57:43.778999 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 10 00:57:43.789931 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 10 00:57:43.802336 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 10 00:57:43.828667 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 10 00:57:43.843007 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 10 00:57:43.855268 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 10 00:57:43.873909 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 10 00:57:43.900500 kernel: loop1: detected capacity change from 0 to 219192 Mar 10 00:57:43.902358 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 10 00:57:43.919029 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 10 00:57:43.923229 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 10 00:57:44.253158 udevadm[1187]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 10 00:57:44.280516 kernel: loop2: detected capacity change from 0 to 142488 Mar 10 00:57:44.306359 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 10 00:57:44.328807 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 10 00:57:44.566583 kernel: loop3: detected capacity change from 0 to 140768 Mar 10 00:57:44.625761 kernel: loop4: detected capacity change from 0 to 219192 Mar 10 00:57:44.630290 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Mar 10 00:57:44.630336 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Mar 10 00:57:44.650564 kernel: loop5: detected capacity change from 0 to 142488 Mar 10 00:57:44.654088 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 10 00:57:44.863947 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 10 00:57:44.895736 (sd-merge)[1195]: Merged extensions into '/usr'. Mar 10 00:57:44.953698 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Mar 10 00:57:44.953718 systemd[1]: Reloading... Mar 10 00:57:45.080529 zram_generator::config[1223]: No configuration found. Mar 10 00:57:45.780240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 00:57:45.813738 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 10 00:57:45.835249 systemd[1]: Reloading finished in 880 ms. Mar 10 00:57:45.877385 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 10 00:57:45.969773 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Mar 10 00:57:46.026851 systemd[1]: Starting ensure-sysext.service... Mar 10 00:57:46.031580 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 10 00:57:46.059759 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Mar 10 00:57:46.059779 systemd[1]: Reloading... Mar 10 00:57:46.209540 zram_generator::config[1287]: No configuration found. Mar 10 00:57:46.223155 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 10 00:57:46.224502 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 10 00:57:46.226029 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 10 00:57:46.226523 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 10 00:57:46.226693 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 10 00:57:46.231204 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 10 00:57:46.231245 systemd-tmpfiles[1261]: Skipping /boot Mar 10 00:57:46.246279 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 10 00:57:46.246330 systemd-tmpfiles[1261]: Skipping /boot Mar 10 00:57:46.361621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 00:57:46.411730 systemd[1]: Reloading finished in 351 ms. Mar 10 00:57:46.435316 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 10 00:57:46.453366 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 00:57:46.469928 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Mar 10 00:57:46.475949 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 10 00:57:46.481955 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 10 00:57:46.491107 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 10 00:57:46.505364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 00:57:46.512955 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 10 00:57:46.522187 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 00:57:46.522713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 00:57:46.532117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 00:57:46.538879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 00:57:46.551917 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 00:57:46.555957 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 00:57:46.559949 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 10 00:57:46.562323 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Mar 10 00:57:46.563792 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 00:57:46.565224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 00:57:46.565530 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 00:57:46.571332 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Mar 10 00:57:46.576626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 00:57:46.576939 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 00:57:46.582890 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 00:57:46.583124 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 00:57:46.594791 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 10 00:57:46.605360 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 00:57:46.605795 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 00:57:46.609542 augenrules[1356]: No rules Mar 10 00:57:46.613802 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 00:57:46.624891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 00:57:46.633512 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 00:57:46.637771 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 00:57:46.640560 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 10 00:57:46.647632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 00:57:46.648702 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 00:57:46.657769 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 10 00:57:46.664793 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 10 00:57:46.671569 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Mar 10 00:57:46.679809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 00:57:46.680162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 00:57:46.686169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 00:57:46.686575 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 00:57:46.694351 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 00:57:46.694953 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 00:57:46.700232 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 10 00:57:46.722888 systemd[1]: Finished ensure-sysext.service.
Mar 10 00:57:46.732119 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 10 00:57:46.732268 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 00:57:46.732526 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 00:57:46.830544 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1388)
Mar 10 00:57:46.842401 systemd-resolved[1330]: Positive Trust Anchors:
Mar 10 00:57:46.842518 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 00:57:46.842546 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 00:57:46.860886 systemd-resolved[1330]: Defaulting to hostname 'linux'.
Mar 10 00:57:46.868833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 00:57:46.882794 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 00:57:46.892789 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 00:57:46.903938 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 00:57:46.909507 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 00:57:46.912872 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 00:57:46.921918 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 10 00:57:46.926782 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 10 00:57:46.926816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 00:57:46.927218 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 00:57:46.932469 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 10 00:57:46.935016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 00:57:46.935268 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 00:57:46.940233 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 00:57:46.940556 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 00:57:46.946505 kernel: ACPI: button: Power Button [PWRF]
Mar 10 00:57:46.948293 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 00:57:46.948605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 00:57:46.954390 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 00:57:46.954746 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 00:57:46.988712 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 10 00:57:47.308312 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 10 00:57:47.321276 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 10 00:57:47.361474 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 10 00:57:47.365951 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 00:57:47.373316 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 00:57:47.373500 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 00:57:47.377618 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 00:57:47.551900 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 10 00:57:47.566791 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 00:57:47.585327 systemd-networkd[1407]: lo: Link UP
Mar 10 00:57:47.585360 systemd-networkd[1407]: lo: Gained carrier
Mar 10 00:57:47.587987 systemd-networkd[1407]: Enumeration completed
Mar 10 00:57:47.590787 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 00:57:47.590990 systemd[1]: Reached target network.target - Network.
Mar 10 00:57:47.603761 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 10 00:57:47.605204 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 00:57:47.605265 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 00:57:47.606916 systemd-networkd[1407]: eth0: Link UP
Mar 10 00:57:47.606949 systemd-networkd[1407]: eth0: Gained carrier
Mar 10 00:57:47.606963 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 00:57:47.615930 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 10 00:57:47.628148 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 10 00:57:47.632765 systemd[1]: Reached target time-set.target - System Time Set.
Mar 10 00:57:47.636624 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 10 00:57:47.638151 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Mar 10 00:57:48.782026 kernel: mousedev: PS/2 mouse device common for all mice
Mar 10 00:57:48.776980 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 10 00:57:48.777060 systemd-timesyncd[1408]: Initial clock synchronization to Tue 2026-03-10 00:57:48.776788 UTC.
Mar 10 00:57:48.779162 systemd-resolved[1330]: Clock change detected. Flushing caches.
Mar 10 00:57:49.085813 kernel: kvm_amd: TSC scaling supported
Mar 10 00:57:49.085988 kernel: kvm_amd: Nested Virtualization enabled
Mar 10 00:57:49.086009 kernel: kvm_amd: Nested Paging enabled
Mar 10 00:57:49.087541 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 10 00:57:49.089624 kernel: kvm_amd: PMU virtualization is disabled
Mar 10 00:57:49.172385 kernel: EDAC MC: Ver: 3.0.0
Mar 10 00:57:49.258366 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 10 00:57:49.438218 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 10 00:57:49.443154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 00:57:49.484162 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 10 00:57:49.526557 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 10 00:57:49.534536 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 00:57:49.542028 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 00:57:49.546396 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 10 00:57:49.551248 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 10 00:57:49.556468 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 10 00:57:49.561006 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 10 00:57:49.565995 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 10 00:57:49.570994 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 10 00:57:49.571086 systemd[1]: Reached target paths.target - Path Units.
Mar 10 00:57:49.574947 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 00:57:49.579777 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 10 00:57:49.586850 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 10 00:57:49.601271 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 10 00:57:49.607874 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 10 00:57:49.613452 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 10 00:57:49.617502 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 00:57:49.621185 systemd[1]: Reached target basic.target - Basic System.
Mar 10 00:57:49.624860 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 10 00:57:49.625039 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 10 00:57:49.627788 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 10 00:57:49.631870 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 10 00:57:49.634088 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 10 00:57:49.641951 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 10 00:57:49.653039 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 10 00:57:49.656319 jq[1438]: false
Mar 10 00:57:49.657638 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 10 00:57:49.663021 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 10 00:57:49.669188 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 10 00:57:49.674429 dbus-daemon[1437]: [system] SELinux support is enabled
Mar 10 00:57:49.675969 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 10 00:57:49.683987 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 10 00:57:49.689493 extend-filesystems[1439]: Found loop3
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found loop4
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found loop5
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found sr0
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found vda
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found vda1
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found vda2
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found vda3
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found usr
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found vda4
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found vda6
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found vda7
Mar 10 00:57:49.693842 extend-filesystems[1439]: Found vda9
Mar 10 00:57:49.693842 extend-filesystems[1439]: Checking size of /dev/vda9
Mar 10 00:57:49.805252 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 10 00:57:49.805300 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 10 00:57:49.805392 extend-filesystems[1439]: Resized partition /dev/vda9
Mar 10 00:57:49.818854 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1387)
Mar 10 00:57:49.722059 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 10 00:57:49.819153 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024)
Mar 10 00:57:49.819153 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 10 00:57:49.819153 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 10 00:57:49.819153 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 10 00:57:49.733402 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 10 00:57:49.838175 extend-filesystems[1439]: Resized filesystem in /dev/vda9
Mar 10 00:57:49.734149 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 10 00:57:49.736270 systemd[1]: Starting update-engine.service - Update Engine...
Mar 10 00:57:49.849459 jq[1460]: true
Mar 10 00:57:49.741832 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 10 00:57:49.849938 tar[1462]: linux-amd64/LICENSE
Mar 10 00:57:49.849938 tar[1462]: linux-amd64/helm
Mar 10 00:57:49.743157 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 10 00:57:49.745936 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 10 00:57:49.760583 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 10 00:57:49.760958 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 10 00:57:49.761366 systemd[1]: motdgen.service: Deactivated successfully.
Mar 10 00:57:49.761624 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 10 00:57:49.765449 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 10 00:57:49.765730 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 10 00:57:49.782547 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 10 00:57:49.782585 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 10 00:57:49.787426 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 10 00:57:49.787449 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 10 00:57:49.805322 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 10 00:57:49.805753 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 10 00:57:49.845571 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 10 00:57:49.848118 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 10 00:57:49.848143 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 10 00:57:49.848760 systemd-logind[1455]: New seat seat0.
Mar 10 00:57:49.854310 jq[1471]: true
Mar 10 00:57:49.855867 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 10 00:57:49.859773 update_engine[1459]: I20260310 00:57:49.858939 1459 main.cc:92] Flatcar Update Engine starting
Mar 10 00:57:49.863113 update_engine[1459]: I20260310 00:57:49.863039 1459 update_check_scheduler.cc:74] Next update check in 2m5s
Mar 10 00:57:49.875507 systemd[1]: Started update-engine.service - Update Engine.
Mar 10 00:57:49.882640 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 10 00:57:49.925749 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
Mar 10 00:57:49.928195 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 10 00:57:49.930550 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 10 00:57:49.936258 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 10 00:57:49.959968 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 10 00:57:49.966591 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 10 00:57:49.980173 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 10 00:57:49.994132 systemd[1]: issuegen.service: Deactivated successfully.
Mar 10 00:57:49.994488 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 10 00:57:50.013217 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 10 00:57:50.032470 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 10 00:57:50.045352 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 10 00:57:50.057252 systemd-networkd[1407]: eth0: Gained IPv6LL
Mar 10 00:57:50.062127 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 10 00:57:50.066158 systemd[1]: Reached target getty.target - Login Prompts.
Mar 10 00:57:50.070437 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 10 00:57:50.077038 systemd[1]: Reached target network-online.target - Network is Online.
Mar 10 00:57:50.091211 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 10 00:57:50.097006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:57:50.100585 containerd[1472]: time="2026-03-10T00:57:50.100518005Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 10 00:57:50.105138 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 10 00:57:50.131129 containerd[1472]: time="2026-03-10T00:57:50.130994605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 10 00:57:50.139306 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 10 00:57:50.139533 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 10 00:57:50.144492 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 10 00:57:50.146779 containerd[1472]: time="2026-03-10T00:57:50.145412068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 10 00:57:50.146779 containerd[1472]: time="2026-03-10T00:57:50.145440811Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 10 00:57:50.146779 containerd[1472]: time="2026-03-10T00:57:50.145456511Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 10 00:57:50.146779 containerd[1472]: time="2026-03-10T00:57:50.145621949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 10 00:57:50.146779 containerd[1472]: time="2026-03-10T00:57:50.145637348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 10 00:57:50.146779 containerd[1472]: time="2026-03-10T00:57:50.145785014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 00:57:50.146779 containerd[1472]: time="2026-03-10T00:57:50.145800453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 10 00:57:50.146779 containerd[1472]: time="2026-03-10T00:57:50.146022978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 00:57:50.146779 containerd[1472]: time="2026-03-10T00:57:50.146038507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 10 00:57:50.146779 containerd[1472]: time="2026-03-10T00:57:50.146051802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 00:57:50.146779 containerd[1472]: time="2026-03-10T00:57:50.146060859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 10 00:57:50.147035 containerd[1472]: time="2026-03-10T00:57:50.146155305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 10 00:57:50.147035 containerd[1472]: time="2026-03-10T00:57:50.146392207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 10 00:57:50.147035 containerd[1472]: time="2026-03-10T00:57:50.146499107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 00:57:50.147035 containerd[1472]: time="2026-03-10T00:57:50.146512191Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 10 00:57:50.147035 containerd[1472]: time="2026-03-10T00:57:50.146605697Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 10 00:57:50.147035 containerd[1472]: time="2026-03-10T00:57:50.146742181Z" level=info msg="metadata content store policy set" policy=shared
Mar 10 00:57:50.153094 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 10 00:57:50.161126 containerd[1472]: time="2026-03-10T00:57:50.161104030Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 10 00:57:50.161467 containerd[1472]: time="2026-03-10T00:57:50.161240084Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 10 00:57:50.161536 containerd[1472]: time="2026-03-10T00:57:50.161520297Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 10 00:57:50.161629 containerd[1472]: time="2026-03-10T00:57:50.161614323Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 10 00:57:50.161840 containerd[1472]: time="2026-03-10T00:57:50.161823042Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 10 00:57:50.162131 containerd[1472]: time="2026-03-10T00:57:50.162111521Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 10 00:57:50.163014 containerd[1472]: time="2026-03-10T00:57:50.162993728Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 10 00:57:50.163513 containerd[1472]: time="2026-03-10T00:57:50.163437036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 10 00:57:50.163609 containerd[1472]: time="2026-03-10T00:57:50.163593619Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 10 00:57:50.163802 containerd[1472]: time="2026-03-10T00:57:50.163785767Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 10 00:57:50.163985 containerd[1472]: time="2026-03-10T00:57:50.163969491Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 10 00:57:50.164150 containerd[1472]: time="2026-03-10T00:57:50.164133277Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 10 00:57:50.164296 containerd[1472]: time="2026-03-10T00:57:50.164282185Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 10 00:57:50.164439 containerd[1472]: time="2026-03-10T00:57:50.164424320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 10 00:57:50.164494 containerd[1472]: time="2026-03-10T00:57:50.164481617Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 10 00:57:50.164610 containerd[1472]: time="2026-03-10T00:57:50.164595830Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 10 00:57:50.164735 containerd[1472]: time="2026-03-10T00:57:50.164718479Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.164788029Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.164944371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.164961573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.164974527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.164985357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.165001037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.165017097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.165028789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.165039979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.165052403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.165065928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.165076678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.165087869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.165099380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165196 containerd[1472]: time="2026-03-10T00:57:50.165113727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 10 00:57:50.165491 containerd[1472]: time="2026-03-10T00:57:50.165131680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165491 containerd[1472]: time="2026-03-10T00:57:50.165142050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.165491 containerd[1472]: time="2026-03-10T00:57:50.165151528Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 10 00:57:50.166035 containerd[1472]: time="2026-03-10T00:57:50.165779701Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 10 00:57:50.166035 containerd[1472]: time="2026-03-10T00:57:50.165806431Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 10 00:57:50.166035 containerd[1472]: time="2026-03-10T00:57:50.165818373Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 10 00:57:50.166035 containerd[1472]: time="2026-03-10T00:57:50.165830566Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 10 00:57:50.166035 containerd[1472]: time="2026-03-10T00:57:50.165839753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.166035 containerd[1472]: time="2026-03-10T00:57:50.165851134Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 10 00:57:50.166035 containerd[1472]: time="2026-03-10T00:57:50.165866503Z" level=info msg="NRI interface is disabled by configuration."
Mar 10 00:57:50.166035 containerd[1472]: time="2026-03-10T00:57:50.165877143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 10 00:57:50.166432 containerd[1472]: time="2026-03-10T00:57:50.166379050Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 10 00:57:50.167409 containerd[1472]: time="2026-03-10T00:57:50.166988738Z" level=info msg="Connect containerd service"
Mar 10 00:57:50.167409 containerd[1472]: time="2026-03-10T00:57:50.167044352Z" level=info msg="using legacy CRI server"
Mar 10 00:57:50.167409 containerd[1472]: time="2026-03-10T00:57:50.167058258Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 10 00:57:50.167409 containerd[1472]: time="2026-03-10T00:57:50.167153306Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 10 00:57:50.168581 containerd[1472]: time="2026-03-10T00:57:50.168050511Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 10 00:57:50.168581 containerd[1472]: time="2026-03-10T00:57:50.168260213Z" level=info msg="Start subscribing containerd event"
Mar 10 00:57:50.168581 containerd[1472]: time="2026-03-10T00:57:50.168395686Z" level=info msg="Start recovering state"
Mar 10 00:57:50.168581 containerd[1472]: time="2026-03-10T00:57:50.168458403Z" level=info msg="Start event monitor"
Mar 10 00:57:50.168581 containerd[1472]: time="2026-03-10T00:57:50.168469093Z" level=info msg="Start snapshots syncer"
Mar 10 00:57:50.168581 containerd[1472]: time="2026-03-10T00:57:50.168478180Z" level=info msg="Start cni network conf syncer for default"
Mar 10 00:57:50.168581 containerd[1472]: time="2026-03-10T00:57:50.168485183Z" level=info msg="Start streaming server"
Mar 10 00:57:50.169736 containerd[1472]: time="2026-03-10T00:57:50.169641502Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 10 00:57:50.169853 containerd[1472]: time="2026-03-10T00:57:50.169834773Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 10 00:57:50.170108 systemd[1]: Started containerd.service - containerd container runtime.
Mar 10 00:57:50.170410 containerd[1472]: time="2026-03-10T00:57:50.170311543Z" level=info msg="containerd successfully booted in 0.071867s"
Mar 10 00:57:51.191098 tar[1462]: linux-amd64/README.md
Mar 10 00:57:51.215434 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 10 00:57:54.736089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:57:54.748207 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 10 00:57:54.755269 systemd[1]: Startup finished in 2.529s (kernel) + 10.383s (initrd) + 13.444s (userspace) = 26.357s.
Mar 10 00:57:54.756612 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 00:57:59.171010 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 10 00:57:59.186598 systemd[1]: Started sshd@0-10.0.0.60:22-10.0.0.1:38026.service - OpenSSH per-connection server daemon (10.0.0.1:38026).
Mar 10 00:58:00.047069 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 38026 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 00:58:00.218283 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:58:00.294859 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 10 00:58:00.309620 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 10 00:58:00.325370 systemd-logind[1455]: New session 1 of user core.
Mar 10 00:58:00.625444 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 10 00:58:00.660387 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 10 00:58:00.681403 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 10 00:58:01.667595 systemd[1567]: Queued start job for default target default.target.
Mar 10 00:58:01.684474 systemd[1567]: Created slice app.slice - User Application Slice.
Mar 10 00:58:01.684509 systemd[1567]: Reached target paths.target - Paths.
Mar 10 00:58:01.684531 systemd[1567]: Reached target timers.target - Timers.
Mar 10 00:58:01.691464 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 10 00:58:01.887526 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 10 00:58:01.898493 systemd[1567]: Reached target sockets.target - Sockets.
Mar 10 00:58:01.898554 systemd[1567]: Reached target basic.target - Basic System.
Mar 10 00:58:01.901034 systemd[1567]: Reached target default.target - Main User Target.
Mar 10 00:58:01.901117 systemd[1567]: Startup finished in 954ms.
Mar 10 00:58:01.901205 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 10 00:58:01.954452 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 10 00:58:02.384511 kubelet[1550]: E0310 00:58:02.381507 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 00:58:02.408114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 00:58:02.408409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 00:58:02.412331 systemd[1]: kubelet.service: Consumed 11.956s CPU time.
Mar 10 00:58:02.479532 systemd[1]: Started sshd@1-10.0.0.60:22-10.0.0.1:58388.service - OpenSSH per-connection server daemon (10.0.0.1:58388).
Mar 10 00:58:03.569340 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 58388 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 00:58:03.664612 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:58:03.743374 systemd-logind[1455]: New session 2 of user core.
Mar 10 00:58:03.774492 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 10 00:58:03.968442 sshd[1579]: pam_unix(sshd:session): session closed for user core
Mar 10 00:58:03.986486 systemd[1]: sshd@1-10.0.0.60:22-10.0.0.1:58388.service: Deactivated successfully.
Mar 10 00:58:03.994161 systemd[1]: session-2.scope: Deactivated successfully.
Mar 10 00:58:03.998107 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit.
Mar 10 00:58:04.018031 systemd[1]: Started sshd@2-10.0.0.60:22-10.0.0.1:58396.service - OpenSSH per-connection server daemon (10.0.0.1:58396).
Mar 10 00:58:04.022554 systemd-logind[1455]: Removed session 2.
Mar 10 00:58:04.086556 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 58396 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 00:58:04.095389 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:58:04.159211 systemd-logind[1455]: New session 3 of user core.
Mar 10 00:58:04.185164 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 10 00:58:04.366854 sshd[1586]: pam_unix(sshd:session): session closed for user core
Mar 10 00:58:04.392138 systemd[1]: sshd@2-10.0.0.60:22-10.0.0.1:58396.service: Deactivated successfully.
Mar 10 00:58:04.427335 systemd[1]: session-3.scope: Deactivated successfully.
Mar 10 00:58:04.436066 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit.
Mar 10 00:58:04.453486 systemd[1]: Started sshd@3-10.0.0.60:22-10.0.0.1:58412.service - OpenSSH per-connection server daemon (10.0.0.1:58412).
Mar 10 00:58:04.468108 systemd-logind[1455]: Removed session 3.
Mar 10 00:58:04.556372 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 58412 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 00:58:04.562893 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:58:04.588226 systemd-logind[1455]: New session 4 of user core.
Mar 10 00:58:04.603634 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 10 00:58:04.798074 sshd[1593]: pam_unix(sshd:session): session closed for user core
Mar 10 00:58:04.841439 systemd[1]: sshd@3-10.0.0.60:22-10.0.0.1:58412.service: Deactivated successfully.
Mar 10 00:58:04.846227 systemd[1]: session-4.scope: Deactivated successfully.
Mar 10 00:58:04.849065 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit.
Mar 10 00:58:04.873070 systemd[1]: Started sshd@4-10.0.0.60:22-10.0.0.1:58420.service - OpenSSH per-connection server daemon (10.0.0.1:58420).
Mar 10 00:58:04.882073 systemd-logind[1455]: Removed session 4.
Mar 10 00:58:04.963512 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 58420 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 00:58:04.967312 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:58:05.020248 systemd-logind[1455]: New session 5 of user core.
Mar 10 00:58:05.044020 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 10 00:58:05.164346 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 10 00:58:05.165246 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 00:58:13.194616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 10 00:58:13.890537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:58:20.048106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:58:20.080486 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 00:58:22.407275 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 10 00:58:22.677341 (dockerd)[1637]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 10 00:58:25.101175 kubelet[1627]: E0310 00:58:25.099580 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 00:58:25.172395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 00:58:25.181291 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 00:58:25.192439 systemd[1]: kubelet.service: Consumed 11.347s CPU time.
Mar 10 00:58:35.397143 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 10 00:58:35.471372 update_engine[1459]: I20260310 00:58:35.460289 1459 update_attempter.cc:509] Updating boot flags...
Mar 10 00:58:35.499355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:58:36.269199 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1660)
Mar 10 00:58:36.409422 dockerd[1637]: time="2026-03-10T00:58:36.407077934Z" level=info msg="Starting up"
Mar 10 00:58:38.632543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:58:38.666342 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 00:58:39.179587 systemd[1]: var-lib-docker-metacopy\x2dcheck3222803032-merged.mount: Deactivated successfully.
Mar 10 00:58:39.660919 dockerd[1637]: time="2026-03-10T00:58:39.655135887Z" level=info msg="Loading containers: start."
Mar 10 00:58:40.019235 kubelet[1684]: E0310 00:58:40.018344 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 00:58:40.026480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 00:58:40.028352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 00:58:40.029330 systemd[1]: kubelet.service: Consumed 4.229s CPU time.
Mar 10 00:58:41.015590 kernel: Initializing XFRM netlink socket
Mar 10 00:58:42.108614 systemd-networkd[1407]: docker0: Link UP
Mar 10 00:58:42.285446 dockerd[1637]: time="2026-03-10T00:58:42.283460870Z" level=info msg="Loading containers: done."
Mar 10 00:58:42.703163 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck909237626-merged.mount: Deactivated successfully.
Mar 10 00:58:42.788151 dockerd[1637]: time="2026-03-10T00:58:42.787380649Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 10 00:58:42.796328 dockerd[1637]: time="2026-03-10T00:58:42.795422697Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 10 00:58:42.799077 dockerd[1637]: time="2026-03-10T00:58:42.798313536Z" level=info msg="Daemon has completed initialization"
Mar 10 00:58:43.371121 dockerd[1637]: time="2026-03-10T00:58:43.367531402Z" level=info msg="API listen on /run/docker.sock"
Mar 10 00:58:43.375008 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 10 00:58:49.492634 containerd[1472]: time="2026-03-10T00:58:49.491634405Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 10 00:58:50.182164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 10 00:58:50.261301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:58:52.676551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332516525.mount: Deactivated successfully.
Mar 10 00:58:52.760291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:58:52.767085 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 00:58:54.406604 kubelet[1832]: E0310 00:58:54.405963 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 00:58:54.422543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 00:58:54.425302 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 00:58:54.426572 systemd[1]: kubelet.service: Consumed 3.796s CPU time.
Mar 10 00:59:04.647517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 10 00:59:04.672437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:59:06.303082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:59:06.371517 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 00:59:07.852286 containerd[1472]: time="2026-03-10T00:59:07.850017363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:07.855453 containerd[1472]: time="2026-03-10T00:59:07.854636021Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497"
Mar 10 00:59:07.860200 containerd[1472]: time="2026-03-10T00:59:07.859583303Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:07.879022 containerd[1472]: time="2026-03-10T00:59:07.878526145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:07.883296 containerd[1472]: time="2026-03-10T00:59:07.881467996Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 18.389092209s"
Mar 10 00:59:07.883296 containerd[1472]: time="2026-03-10T00:59:07.881516947Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 10 00:59:07.892121 containerd[1472]: time="2026-03-10T00:59:07.892072998Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 10 00:59:08.095626 kubelet[1902]: E0310 00:59:08.094873 1902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 00:59:08.102561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 00:59:08.103234 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 00:59:08.104424 systemd[1]: kubelet.service: Consumed 3.444s CPU time.
Mar 10 00:59:18.151222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 10 00:59:18.211435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:59:18.811303 containerd[1472]: time="2026-03-10T00:59:18.810518343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:18.835297 containerd[1472]: time="2026-03-10T00:59:18.834129636Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 10 00:59:18.840031 containerd[1472]: time="2026-03-10T00:59:18.839926830Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:18.857527 containerd[1472]: time="2026-03-10T00:59:18.857444961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:18.862365 containerd[1472]: time="2026-03-10T00:59:18.861055186Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 10.968582692s"
Mar 10 00:59:18.862365 containerd[1472]: time="2026-03-10T00:59:18.861109467Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 10 00:59:18.872435 containerd[1472]: time="2026-03-10T00:59:18.871367681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 10 00:59:19.244024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:59:19.286350 (kubelet)[1923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 00:59:20.341399 kubelet[1923]: E0310 00:59:20.339519 1923 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 00:59:20.347544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 00:59:20.348125 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 00:59:20.352105 systemd[1]: kubelet.service: Consumed 2.075s CPU time.
Mar 10 00:59:24.605464 containerd[1472]: time="2026-03-10T00:59:24.601526463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:24.609485 containerd[1472]: time="2026-03-10T00:59:24.609215316Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 10 00:59:24.617008 containerd[1472]: time="2026-03-10T00:59:24.616288794Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:24.632592 containerd[1472]: time="2026-03-10T00:59:24.632058044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:24.653012 containerd[1472]: time="2026-03-10T00:59:24.649571350Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 5.777007887s"
Mar 10 00:59:24.653012 containerd[1472]: time="2026-03-10T00:59:24.649929058Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 10 00:59:24.659491 containerd[1472]: time="2026-03-10T00:59:24.659256176Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 10 00:59:30.192590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2595346619.mount: Deactivated successfully.
Mar 10 00:59:30.417434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 10 00:59:30.438950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:59:31.297640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:59:31.303451 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 00:59:32.955381 kubelet[1952]: E0310 00:59:32.954604 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 00:59:32.985388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 00:59:32.986129 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 00:59:33.005375 systemd[1]: kubelet.service: Consumed 2.560s CPU time.
Mar 10 00:59:35.710645 containerd[1472]: time="2026-03-10T00:59:35.710010403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:35.725639 containerd[1472]: time="2026-03-10T00:59:35.723455553Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 10 00:59:35.767448 containerd[1472]: time="2026-03-10T00:59:35.765304877Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:35.913450 containerd[1472]: time="2026-03-10T00:59:35.912232136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:35.965372 containerd[1472]: time="2026-03-10T00:59:35.961136623Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 11.301337163s"
Mar 10 00:59:35.965372 containerd[1472]: time="2026-03-10T00:59:35.962200872Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 10 00:59:35.991477 containerd[1472]: time="2026-03-10T00:59:35.987466627Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 10 00:59:38.586331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3136063095.mount: Deactivated successfully.
Mar 10 00:59:43.183394 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 10 00:59:43.264473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:59:45.088315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:59:45.222113 (kubelet)[2021]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 00:59:46.350062 kubelet[2021]: E0310 00:59:46.348632 2021 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 00:59:46.374384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 00:59:46.376522 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 00:59:46.391500 systemd[1]: kubelet.service: Consumed 2.711s CPU time.
Mar 10 00:59:54.985529 containerd[1472]: time="2026-03-10T00:59:54.983242449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:54.999374 containerd[1472]: time="2026-03-10T00:59:54.998464206Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 10 00:59:55.008119 containerd[1472]: time="2026-03-10T00:59:55.007997802Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:55.105227 containerd[1472]: time="2026-03-10T00:59:55.104248306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:55.128457 containerd[1472]: time="2026-03-10T00:59:55.122410050Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 19.134704526s"
Mar 10 00:59:55.128457 containerd[1472]: time="2026-03-10T00:59:55.124469475Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 10 00:59:55.213260 containerd[1472]: time="2026-03-10T00:59:55.211433150Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 10 00:59:55.479071 update_engine[1459]: I20260310 00:59:55.477303 1459 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 10 00:59:55.479071 update_engine[1459]: I20260310 00:59:55.477387 1459 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 10 00:59:55.479071 update_engine[1459]: I20260310 00:59:55.478127 1459 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 10 00:59:55.481245 update_engine[1459]: I20260310 00:59:55.481220 1459 omaha_request_params.cc:62] Current group set to lts
Mar 10 00:59:55.482308 update_engine[1459]: I20260310 00:59:55.482277 1459 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 10 00:59:55.483046 update_engine[1459]: I20260310 00:59:55.483014 1459 update_attempter.cc:643] Scheduling an action processor start.
Mar 10 00:59:55.483245 update_engine[1459]: I20260310 00:59:55.483125 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 10 00:59:55.484238 update_engine[1459]: I20260310 00:59:55.483585 1459 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 10 00:59:55.489125 update_engine[1459]: I20260310 00:59:55.484357 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 10 00:59:55.489125 update_engine[1459]: I20260310 00:59:55.484380 1459 omaha_request_action.cc:272] Request:
Mar 10 00:59:55.489125 update_engine[1459]:
Mar 10 00:59:55.489125 update_engine[1459]:
Mar 10 00:59:55.489125 update_engine[1459]:
Mar 10 00:59:55.489125 update_engine[1459]:
Mar 10 00:59:55.489125 update_engine[1459]:
Mar 10 00:59:55.489125 update_engine[1459]:
Mar 10 00:59:55.489125 update_engine[1459]:
Mar 10 00:59:55.489125 update_engine[1459]:
Mar 10 00:59:55.489125 update_engine[1459]: I20260310 00:59:55.484390 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 00:59:55.490176 locksmithd[1481]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 10 00:59:55.497516 update_engine[1459]: I20260310 00:59:55.497350 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 00:59:55.499289 update_engine[1459]: I20260310 00:59:55.498966 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 00:59:55.517084 update_engine[1459]: E20260310 00:59:55.516368 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 00:59:55.517084 update_engine[1459]: I20260310 00:59:55.516556 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 10 00:59:56.426291 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 10 00:59:56.478316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:59:56.792375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248455042.mount: Deactivated successfully.
Mar 10 00:59:56.813267 containerd[1472]: time="2026-03-10T00:59:56.813045233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:56.815930 containerd[1472]: time="2026-03-10T00:59:56.815881817Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 10 00:59:56.822447 containerd[1472]: time="2026-03-10T00:59:56.822392647Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:56.854545 containerd[1472]: time="2026-03-10T00:59:56.854123258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:59:56.857955 containerd[1472]: time="2026-03-10T00:59:56.856369787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.64426891s"
Mar 10 00:59:56.857955 containerd[1472]: time="2026-03-10T00:59:56.856429469Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 10 00:59:56.864310 containerd[1472]: time="2026-03-10T00:59:56.864162781Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 10 00:59:58.298269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:59:58.323467 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 00:59:59.738641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994639111.mount: Deactivated successfully.
Mar 10 01:00:00.013160 kubelet[2041]: E0310 01:00:00.010558 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:00:00.020578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:00:00.021167 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:00:00.022566 systemd[1]: kubelet.service: Consumed 3.661s CPU time.
Mar 10 01:00:05.472635 update_engine[1459]: I20260310 01:00:05.470299 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:00:05.482175 update_engine[1459]: I20260310 01:00:05.474291 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:00:05.482175 update_engine[1459]: I20260310 01:00:05.475145 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:00:05.509324 update_engine[1459]: E20260310 01:00:05.509152 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:00:05.509324 update_engine[1459]: I20260310 01:00:05.509267 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 10 01:00:10.162266 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Mar 10 01:00:10.202081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:00:11.549963 containerd[1472]: time="2026-03-10T01:00:11.549515018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:00:11.558141 containerd[1472]: time="2026-03-10T01:00:11.554165905Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 10 01:00:11.573265 containerd[1472]: time="2026-03-10T01:00:11.570016316Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:00:11.592459 containerd[1472]: time="2026-03-10T01:00:11.592390058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:00:11.597252 containerd[1472]: time="2026-03-10T01:00:11.594457192Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 14.730158497s"
Mar 10 01:00:11.597252 containerd[1472]: time="2026-03-10T01:00:11.594498319Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 10 01:00:11.962521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:00:11.970411 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:00:12.730631 kubelet[2129]: E0310 01:00:12.727317 2129 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:00:12.782974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:00:12.783360 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:00:12.787243 systemd[1]: kubelet.service: Consumed 2.458s CPU time.
Mar 10 01:00:15.402471 update_engine[1459]: I20260310 01:00:15.401516 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:00:15.402471 update_engine[1459]: I20260310 01:00:15.402549 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:00:15.405183 update_engine[1459]: I20260310 01:00:15.403213 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:00:15.426058 update_engine[1459]: E20260310 01:00:15.425446 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:00:15.426058 update_engine[1459]: I20260310 01:00:15.425645 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 10 01:00:21.570292 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:00:21.571292 systemd[1]: kubelet.service: Consumed 2.458s CPU time.
Mar 10 01:00:21.592083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:00:21.740362 systemd[1]: Reloading requested from client PID 2163 ('systemctl') (unit session-5.scope)...
Mar 10 01:00:21.740491 systemd[1]: Reloading...
Mar 10 01:00:22.050404 zram_generator::config[2202]: No configuration found.
Mar 10 01:00:22.564070 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:00:22.754354 systemd[1]: Reloading finished in 1011 ms.
Mar 10 01:00:22.963074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:00:22.974618 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:00:22.993515 systemd[1]: kubelet.service: Deactivated successfully.
Mar 10 01:00:22.994441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:00:23.016434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:00:23.990284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:00:23.998058 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 10 01:00:25.062265 kubelet[2252]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 10 01:00:25.062265 kubelet[2252]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 01:00:25.065560 kubelet[2252]: I0310 01:00:25.063385 2252 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 10 01:00:25.423259 update_engine[1459]: I20260310 01:00:25.420394 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:00:25.447002 update_engine[1459]: I20260310 01:00:25.425086 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:00:25.447002 update_engine[1459]: I20260310 01:00:25.428612 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:00:25.477543 update_engine[1459]: E20260310 01:00:25.475266 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:00:25.477543 update_engine[1459]: I20260310 01:00:25.477203 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 10 01:00:25.477543 update_engine[1459]: I20260310 01:00:25.480479 1459 omaha_request_action.cc:617] Omaha request response:
Mar 10 01:00:25.489613 update_engine[1459]: E20260310 01:00:25.488049 1459 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 10 01:00:25.489613 update_engine[1459]: I20260310 01:00:25.489217 1459 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 10 01:00:25.489613 update_engine[1459]: I20260310 01:00:25.489235 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 10 01:00:25.489613 update_engine[1459]: I20260310 01:00:25.489246 1459 update_attempter.cc:306] Processing Done.
Mar 10 01:00:25.489613 update_engine[1459]: E20260310 01:00:25.489512 1459 update_attempter.cc:619] Update failed.
Mar 10 01:00:25.490124 update_engine[1459]: I20260310 01:00:25.489830 1459 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 10 01:00:25.490124 update_engine[1459]: I20260310 01:00:25.489849 1459 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 10 01:00:25.490124 update_engine[1459]: I20260310 01:00:25.489972 1459 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 10 01:00:25.490295 update_engine[1459]: I20260310 01:00:25.490136 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 10 01:00:25.490295 update_engine[1459]: I20260310 01:00:25.490172 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 10 01:00:25.490295 update_engine[1459]: I20260310 01:00:25.490183 1459 omaha_request_action.cc:272] Request:
Mar 10 01:00:25.490295 update_engine[1459]:
Mar 10 01:00:25.490295 update_engine[1459]:
Mar 10 01:00:25.490295 update_engine[1459]:
Mar 10 01:00:25.490295 update_engine[1459]:
Mar 10 01:00:25.490295 update_engine[1459]:
Mar 10 01:00:25.490295 update_engine[1459]:
Mar 10 01:00:25.490295 update_engine[1459]: I20260310 01:00:25.490195 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:00:25.491246 update_engine[1459]: I20260310 01:00:25.491019 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:00:25.497262 update_engine[1459]: I20260310 01:00:25.497035 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:00:25.498150 locksmithd[1481]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 10 01:00:25.536297 update_engine[1459]: E20260310 01:00:25.525270 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:00:25.536297 update_engine[1459]: I20260310 01:00:25.534307 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 10 01:00:25.536297 update_engine[1459]: I20260310 01:00:25.534333 1459 omaha_request_action.cc:617] Omaha request response:
Mar 10 01:00:25.536297 update_engine[1459]: I20260310 01:00:25.534513 1459 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 10 01:00:25.536297 update_engine[1459]: I20260310 01:00:25.534533 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 10 01:00:25.536297 update_engine[1459]: I20260310 01:00:25.534544 1459 update_attempter.cc:306] Processing Done.
Mar 10 01:00:25.536297 update_engine[1459]: I20260310 01:00:25.534559 1459 update_attempter.cc:310] Error event sent.
Mar 10 01:00:25.536297 update_engine[1459]: I20260310 01:00:25.534624 1459 update_check_scheduler.cc:74] Next update check in 45m22s
Mar 10 01:00:25.543552 locksmithd[1481]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 10 01:00:27.370321 kubelet[2252]: I0310 01:00:27.369267 2252 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 10 01:00:27.370321 kubelet[2252]: I0310 01:00:27.369569 2252 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 10 01:00:27.370321 kubelet[2252]: I0310 01:00:27.370569 2252 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 10 01:00:27.379644 kubelet[2252]: I0310 01:00:27.370587 2252 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 01:00:27.379644 kubelet[2252]: I0310 01:00:27.376213 2252 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 10 01:00:27.578638 kubelet[2252]: E0310 01:00:27.578208 2252 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 01:00:27.578638 kubelet[2252]: I0310 01:00:27.580421 2252 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 01:00:27.662034 kubelet[2252]: E0310 01:00:27.661492 2252 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 10 01:00:27.662034 kubelet[2252]: I0310 01:00:27.662014 2252 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 10 01:00:27.710457 kubelet[2252]: I0310 01:00:27.708371 2252 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 10 01:00:27.720194 kubelet[2252]: I0310 01:00:27.716638 2252 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 01:00:27.720194 kubelet[2252]: I0310 01:00:27.718063 2252 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 10 01:00:27.724536 kubelet[2252]: I0310 01:00:27.722448 2252 topology_manager.go:138] "Creating topology manager with none policy"
Mar 10 01:00:27.726195 kubelet[2252]: I0310 01:00:27.725297 2252 container_manager_linux.go:306] "Creating device plugin manager"
Mar 10 01:00:27.727578 kubelet[2252]: I0310 01:00:27.727173 2252 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 10 01:00:27.745341 kubelet[2252]: I0310 01:00:27.744537 2252 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 01:00:27.747989 kubelet[2252]: I0310 01:00:27.747071 2252 kubelet.go:475] "Attempting to sync node with API server"
Mar 10 01:00:27.747989 kubelet[2252]: I0310 01:00:27.747294 2252 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 01:00:27.750341 kubelet[2252]: I0310 01:00:27.749276 2252 kubelet.go:387] "Adding apiserver pod source"
Mar 10 01:00:27.752491 kubelet[2252]: I0310 01:00:27.751639 2252 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 01:00:27.776146 kubelet[2252]: E0310 01:00:27.772229 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 10 01:00:27.776146 kubelet[2252]: E0310 01:00:27.772362 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 10 01:00:27.811147 kubelet[2252]: I0310 01:00:27.811103 2252 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 10 01:00:27.818441 kubelet[2252]: I0310 01:00:27.817364 2252 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 01:00:27.818441 kubelet[2252]: I0310 01:00:27.817503 2252 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 10 01:00:27.819350 kubelet[2252]: W0310 01:00:27.819112 2252 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 10 01:00:27.882644 kubelet[2252]: I0310 01:00:27.879994 2252 server.go:1262] "Started kubelet"
Mar 10 01:00:27.903346 kubelet[2252]: I0310 01:00:27.898515 2252 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 01:00:27.903346 kubelet[2252]: I0310 01:00:27.900554 2252 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 10 01:00:27.913239 kubelet[2252]: I0310 01:00:27.909967 2252 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 01:00:27.918506 kubelet[2252]: I0310 01:00:27.918307 2252 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 10 01:00:27.951428 kubelet[2252]: I0310 01:00:27.950398 2252 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 01:00:28.159389 kubelet[2252]: I0310 01:00:27.995524 2252 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 01:00:28.159389 kubelet[2252]: I0310 01:00:28.138190 2252 server.go:310] "Adding debug handlers to kubelet server"
Mar 10 01:00:28.162127 kubelet[2252]: I0310 01:00:28.160522 2252 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 10 01:00:28.181576 kubelet[2252]: E0310 01:00:28.180305 2252 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:00:28.263310 kubelet[2252]: I0310 01:00:28.263261 2252 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 10 01:00:28.273441 kubelet[2252]: I0310 01:00:28.273409 2252 reconciler.go:29] "Reconciler: start to sync state"
Mar 10 01:00:28.274293 kubelet[2252]: E0310 01:00:28.273395 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="200ms"
Mar 10 01:00:28.293356 kubelet[2252]: E0310 01:00:28.293321 2252 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:00:28.299590 kubelet[2252]: I0310 01:00:28.299561 2252 factory.go:223] Registration of the systemd container factory successfully
Mar 10 01:00:28.305440 kubelet[2252]: I0310 01:00:28.305407 2252 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 01:00:28.345502 kubelet[2252]: E0310 01:00:28.342473 2252 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b5515356a83c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:00:27.879269321 +0000 UTC m=+3.754699854,LastTimestamp:2026-03-10 01:00:27.879269321 +0000 UTC m=+3.754699854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 10 01:00:28.346309 kubelet[2252]: E0310 01:00:28.346101 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 10 01:00:28.355428 kubelet[2252]: I0310 01:00:28.354630 2252 factory.go:223] Registration of the containerd container factory successfully
Mar 10 01:00:28.399157 kubelet[2252]: E0310 01:00:28.398285 2252 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 01:00:28.410187 kubelet[2252]: E0310 01:00:28.409488 2252 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:00:28.477395 kubelet[2252]: E0310 01:00:28.477145 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="400ms"
Mar 10 01:00:28.499152 kubelet[2252]: I0310 01:00:28.499108 2252 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 10 01:00:28.502354 kubelet[2252]: I0310 01:00:28.502333 2252 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 10 01:00:28.502480 kubelet[2252]: I0310 01:00:28.502462 2252 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 01:00:28.511105 kubelet[2252]: E0310 01:00:28.511069 2252 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:00:28.516196 kubelet[2252]: I0310 01:00:28.515611 2252 policy_none.go:49] "None policy: Start"
Mar 10 01:00:28.518436 kubelet[2252]: I0310 01:00:28.516624 2252 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 10 01:00:28.520203 kubelet[2252]: I0310 01:00:28.519174 2252 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 10 01:00:28.542181 kubelet[2252]: I0310 01:00:28.541204 2252 policy_none.go:47] "Start"
Mar 10 01:00:28.565438 kubelet[2252]: I0310 01:00:28.564515 2252 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 10 01:00:28.580334 kubelet[2252]: E0310 01:00:28.577287 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 10 01:00:28.584481 kubelet[2252]: I0310 01:00:28.584248 2252 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 10 01:00:28.587291 kubelet[2252]: I0310 01:00:28.586594 2252 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 10 01:00:28.587578 kubelet[2252]: I0310 01:00:28.587442 2252 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 10 01:00:28.590355 kubelet[2252]: E0310 01:00:28.587616 2252 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 01:00:28.590431 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 10 01:00:28.592457 kubelet[2252]: E0310 01:00:28.591109 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 10 01:00:28.612617 kubelet[2252]: E0310 01:00:28.612340 2252 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:00:28.711441 kubelet[2252]: E0310 01:00:28.700647 2252 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:00:28.714502 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 10 01:00:28.747251 kubelet[2252]: E0310 01:00:28.738342 2252 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:00:28.755554 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 10 01:00:28.794499 kubelet[2252]: E0310 01:00:28.791292 2252 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 10 01:00:28.794499 kubelet[2252]: I0310 01:00:28.793108 2252 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 10 01:00:28.794499 kubelet[2252]: I0310 01:00:28.793223 2252 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 01:00:28.795518 kubelet[2252]: I0310 01:00:28.795481 2252 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 10 01:00:28.807298 kubelet[2252]: E0310 01:00:28.805648 2252 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 10 01:00:28.807298 kubelet[2252]: E0310 01:00:28.807237 2252 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 10 01:00:28.883414 kubelet[2252]: E0310 01:00:28.879211 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="800ms"
Mar 10 01:00:28.922224 kubelet[2252]: I0310 01:00:28.919531 2252 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:00:28.931158 kubelet[2252]: E0310 01:00:28.930128 2252 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost"
Mar 10 01:00:29.007255 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 10 01:00:29.030180 kubelet[2252]: E0310 01:00:29.025325 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:00:29.042540 kubelet[2252]: I0310 01:00:29.042506 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/341779218d71c2d172cb07742e7ae7a5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"341779218d71c2d172cb07742e7ae7a5\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:00:29.048273 kubelet[2252]: I0310 01:00:29.043260 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/341779218d71c2d172cb07742e7ae7a5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"341779218d71c2d172cb07742e7ae7a5\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:00:29.050461 kubelet[2252]: I0310 01:00:29.048640 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/341779218d71c2d172cb07742e7ae7a5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"341779218d71c2d172cb07742e7ae7a5\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:00:29.051840 kubelet[2252]: I0310 01:00:29.051817 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 10 01:00:29.064257 systemd[1]: Created slice kubepods-burstable-pod341779218d71c2d172cb07742e7ae7a5.slice - libcontainer container kubepods-burstable-pod341779218d71c2d172cb07742e7ae7a5.slice.
Mar 10 01:00:29.095385 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 10 01:00:29.127011 kubelet[2252]: E0310 01:00:29.126297 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:29.129256 kubelet[2252]: E0310 01:00:29.128491 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:29.155510 kubelet[2252]: I0310 01:00:29.155038 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:00:29.155510 kubelet[2252]: I0310 01:00:29.155083 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:00:29.155510 kubelet[2252]: I0310 01:00:29.155352 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:00:29.155510 kubelet[2252]: I0310 01:00:29.155383 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:00:29.157405 kubelet[2252]: I0310 01:00:29.156017 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:00:29.163588 kubelet[2252]: I0310 01:00:29.163163 2252 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:00:29.171618 kubelet[2252]: E0310 01:00:29.171431 2252 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Mar 10 01:00:29.326427 kubelet[2252]: E0310 01:00:29.323607 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:00:29.569387 kubelet[2252]: E0310 01:00:29.568290 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:29.693550 kubelet[2252]: E0310 01:00:29.690164 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection 
refused" interval="1.6s" Mar 10 01:00:29.694151 containerd[1472]: time="2026-03-10T01:00:29.691648034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 10 01:00:29.701340 kubelet[2252]: E0310 01:00:29.700342 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:29.714171 containerd[1472]: time="2026-03-10T01:00:29.714124649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 10 01:00:29.716468 kubelet[2252]: E0310 01:00:29.716067 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:29.717863 containerd[1472]: time="2026-03-10T01:00:29.717602719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:341779218d71c2d172cb07742e7ae7a5,Namespace:kube-system,Attempt:0,}" Mar 10 01:00:29.720327 kubelet[2252]: I0310 01:00:29.720166 2252 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:00:29.721212 kubelet[2252]: E0310 01:00:29.720648 2252 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Mar 10 01:00:29.735229 kubelet[2252]: E0310 01:00:29.734592 2252 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.60:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:00:29.809547 kubelet[2252]: E0310 01:00:29.809351 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:00:30.099351 kubelet[2252]: E0310 01:00:30.096402 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:00:30.113220 kubelet[2252]: E0310 01:00:30.112447 2252 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b5515356a83c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:00:27.879269321 +0000 UTC m=+3.754699854,LastTimestamp:2026-03-10 01:00:27.879269321 +0000 UTC m=+3.754699854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:00:30.569581 kubelet[2252]: I0310 01:00:30.569360 2252 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:00:30.569581 kubelet[2252]: E0310 01:00:30.573524 2252 kubelet_node_status.go:107] "Unable to register node with API server" 
err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Mar 10 01:00:30.684164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421509414.mount: Deactivated successfully. Mar 10 01:00:30.737647 containerd[1472]: time="2026-03-10T01:00:30.737437974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:00:30.758401 containerd[1472]: time="2026-03-10T01:00:30.758188502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 10 01:00:30.767153 containerd[1472]: time="2026-03-10T01:00:30.765033362Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:00:30.773633 containerd[1472]: time="2026-03-10T01:00:30.773579421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:00:30.786465 containerd[1472]: time="2026-03-10T01:00:30.785500582Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:00:30.794197 containerd[1472]: time="2026-03-10T01:00:30.791495477Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:00:30.794197 containerd[1472]: time="2026-03-10T01:00:30.791621030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:00:30.809472 containerd[1472]: time="2026-03-10T01:00:30.809110514Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:00:30.814338 containerd[1472]: time="2026-03-10T01:00:30.813479402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.095486904s" Mar 10 01:00:30.818572 containerd[1472]: time="2026-03-10T01:00:30.817592825Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.125159964s" Mar 10 01:00:30.850626 containerd[1472]: time="2026-03-10T01:00:30.850302955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.135958205s" Mar 10 01:00:31.278634 kubelet[2252]: E0310 01:00:31.278126 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:00:31.300192 kubelet[2252]: E0310 01:00:31.298266 2252 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="3.2s" Mar 10 01:00:31.587114 kubelet[2252]: E0310 01:00:31.585456 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:00:31.609343 kubelet[2252]: E0310 01:00:31.609256 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:00:31.747046 kubelet[2252]: E0310 01:00:31.743453 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:00:32.196313 kubelet[2252]: I0310 01:00:32.196059 2252 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:00:32.197610 kubelet[2252]: E0310 01:00:32.197277 2252 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Mar 10 01:00:32.240449 containerd[1472]: time="2026-03-10T01:00:32.239578783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:00:32.254190 containerd[1472]: time="2026-03-10T01:00:32.241526971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:00:32.254190 containerd[1472]: time="2026-03-10T01:00:32.241606239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:00:32.272204 containerd[1472]: time="2026-03-10T01:00:32.269213115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:00:32.432243 containerd[1472]: time="2026-03-10T01:00:32.429161051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:00:32.432243 containerd[1472]: time="2026-03-10T01:00:32.429616442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:00:32.432243 containerd[1472]: time="2026-03-10T01:00:32.429635727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:00:32.436233 containerd[1472]: time="2026-03-10T01:00:32.436084781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:00:32.478020 containerd[1472]: time="2026-03-10T01:00:32.477323472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:00:32.478020 containerd[1472]: time="2026-03-10T01:00:32.477989447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:00:32.479182 containerd[1472]: time="2026-03-10T01:00:32.478457912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:00:32.479182 containerd[1472]: time="2026-03-10T01:00:32.478611749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:00:32.718501 systemd[1]: Started cri-containerd-8f26e213cc2249720880936bf816ec40e785fe4ef8824e16d307b10eead2ed2e.scope - libcontainer container 8f26e213cc2249720880936bf816ec40e785fe4ef8824e16d307b10eead2ed2e. Mar 10 01:00:32.787261 systemd[1]: Started cri-containerd-a99bea2cd054c7c8b2325075be1a61bbdfae749e83e8fb34fe74d7500f34cf76.scope - libcontainer container a99bea2cd054c7c8b2325075be1a61bbdfae749e83e8fb34fe74d7500f34cf76. Mar 10 01:00:32.825092 systemd[1]: Started cri-containerd-8ed15190f6845570cd280b490febd5e346e3c6c967033fb8f9ac521126af8c9f.scope - libcontainer container 8ed15190f6845570cd280b490febd5e346e3c6c967033fb8f9ac521126af8c9f. 
Mar 10 01:00:33.326646 containerd[1472]: time="2026-03-10T01:00:33.319527094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ed15190f6845570cd280b490febd5e346e3c6c967033fb8f9ac521126af8c9f\"" Mar 10 01:00:33.340627 kubelet[2252]: E0310 01:00:33.340043 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:33.372426 containerd[1472]: time="2026-03-10T01:00:33.371516477Z" level=info msg="CreateContainer within sandbox \"8ed15190f6845570cd280b490febd5e346e3c6c967033fb8f9ac521126af8c9f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 10 01:00:33.413176 containerd[1472]: time="2026-03-10T01:00:33.413066606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"a99bea2cd054c7c8b2325075be1a61bbdfae749e83e8fb34fe74d7500f34cf76\"" Mar 10 01:00:33.419032 kubelet[2252]: E0310 01:00:33.418343 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:33.470098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793968044.mount: Deactivated successfully. Mar 10 01:00:33.478290 containerd[1472]: time="2026-03-10T01:00:33.474393644Z" level=info msg="CreateContainer within sandbox \"a99bea2cd054c7c8b2325075be1a61bbdfae749e83e8fb34fe74d7500f34cf76\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 10 01:00:33.476245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491636176.mount: Deactivated successfully. 
Mar 10 01:00:33.494114 containerd[1472]: time="2026-03-10T01:00:33.493645038Z" level=info msg="CreateContainer within sandbox \"8ed15190f6845570cd280b490febd5e346e3c6c967033fb8f9ac521126af8c9f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"90d0ab2c3c8cbdb55d59276136d53a424587fc04e75f0df387c2d76416833f2d\"" Mar 10 01:00:33.498234 containerd[1472]: time="2026-03-10T01:00:33.497548247Z" level=info msg="StartContainer for \"90d0ab2c3c8cbdb55d59276136d53a424587fc04e75f0df387c2d76416833f2d\"" Mar 10 01:00:33.542054 containerd[1472]: time="2026-03-10T01:00:33.520043663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:341779218d71c2d172cb07742e7ae7a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f26e213cc2249720880936bf816ec40e785fe4ef8824e16d307b10eead2ed2e\"" Mar 10 01:00:33.542245 kubelet[2252]: E0310 01:00:33.523594 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:33.584649 containerd[1472]: time="2026-03-10T01:00:33.584027077Z" level=info msg="CreateContainer within sandbox \"a99bea2cd054c7c8b2325075be1a61bbdfae749e83e8fb34fe74d7500f34cf76\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5f553becebe52ca71a10a69d4517472f7520b1ebd2cd0b54db028344556224af\"" Mar 10 01:00:33.588641 containerd[1472]: time="2026-03-10T01:00:33.588507707Z" level=info msg="StartContainer for \"5f553becebe52ca71a10a69d4517472f7520b1ebd2cd0b54db028344556224af\"" Mar 10 01:00:33.590512 containerd[1472]: time="2026-03-10T01:00:33.590392688Z" level=info msg="CreateContainer within sandbox \"8f26e213cc2249720880936bf816ec40e785fe4ef8824e16d307b10eead2ed2e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 10 01:00:33.670133 containerd[1472]: time="2026-03-10T01:00:33.668359844Z" level=info msg="CreateContainer within 
sandbox \"8f26e213cc2249720880936bf816ec40e785fe4ef8824e16d307b10eead2ed2e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f4df9fdb7e0b891757c5ac56e082c71315eb4756d4e6c5bb7732920814393462\"" Mar 10 01:00:33.670565 containerd[1472]: time="2026-03-10T01:00:33.670302501Z" level=info msg="StartContainer for \"f4df9fdb7e0b891757c5ac56e082c71315eb4756d4e6c5bb7732920814393462\"" Mar 10 01:00:33.850551 systemd[1]: Started cri-containerd-90d0ab2c3c8cbdb55d59276136d53a424587fc04e75f0df387c2d76416833f2d.scope - libcontainer container 90d0ab2c3c8cbdb55d59276136d53a424587fc04e75f0df387c2d76416833f2d. Mar 10 01:00:34.067211 systemd[1]: Started cri-containerd-5f553becebe52ca71a10a69d4517472f7520b1ebd2cd0b54db028344556224af.scope - libcontainer container 5f553becebe52ca71a10a69d4517472f7520b1ebd2cd0b54db028344556224af. Mar 10 01:00:34.075371 systemd[1]: Started cri-containerd-f4df9fdb7e0b891757c5ac56e082c71315eb4756d4e6c5bb7732920814393462.scope - libcontainer container f4df9fdb7e0b891757c5ac56e082c71315eb4756d4e6c5bb7732920814393462. Mar 10 01:00:34.190571 kubelet[2252]: E0310 01:00:34.183263 2252 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:00:34.308398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3609633611.mount: Deactivated successfully. 
Mar 10 01:00:34.503476 kubelet[2252]: E0310 01:00:34.500553 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="6.4s" Mar 10 01:00:34.548457 containerd[1472]: time="2026-03-10T01:00:34.548015058Z" level=info msg="StartContainer for \"90d0ab2c3c8cbdb55d59276136d53a424587fc04e75f0df387c2d76416833f2d\" returns successfully" Mar 10 01:00:34.548457 containerd[1472]: time="2026-03-10T01:00:34.548249144Z" level=info msg="StartContainer for \"f4df9fdb7e0b891757c5ac56e082c71315eb4756d4e6c5bb7732920814393462\" returns successfully" Mar 10 01:00:34.646441 containerd[1472]: time="2026-03-10T01:00:34.642269276Z" level=info msg="StartContainer for \"5f553becebe52ca71a10a69d4517472f7520b1ebd2cd0b54db028344556224af\" returns successfully" Mar 10 01:00:34.864434 kubelet[2252]: E0310 01:00:34.864162 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:34.864559 kubelet[2252]: E0310 01:00:34.864484 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:34.890022 kubelet[2252]: E0310 01:00:34.889233 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:34.890022 kubelet[2252]: E0310 01:00:34.889408 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:34.904424 kubelet[2252]: E0310 01:00:34.904276 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:34.904566 kubelet[2252]: E0310 01:00:34.904532 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:35.441392 kubelet[2252]: I0310 01:00:35.440567 2252 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:00:35.916542 kubelet[2252]: E0310 01:00:35.915125 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:35.923991 kubelet[2252]: E0310 01:00:35.918105 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:35.923991 kubelet[2252]: E0310 01:00:35.918283 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:35.923991 kubelet[2252]: E0310 01:00:35.923589 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:35.925484 kubelet[2252]: E0310 01:00:35.925329 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:35.926630 kubelet[2252]: E0310 01:00:35.926168 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:36.922412 kubelet[2252]: E0310 01:00:36.922186 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Mar 10 01:00:36.922412 kubelet[2252]: E0310 01:00:36.922430 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:36.925294 kubelet[2252]: E0310 01:00:36.923455 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:36.925294 kubelet[2252]: E0310 01:00:36.924072 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:38.828613 kubelet[2252]: E0310 01:00:38.826370 2252 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:00:39.241420 kubelet[2252]: E0310 01:00:39.240380 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:39.241420 kubelet[2252]: E0310 01:00:39.240549 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:39.744027 kubelet[2252]: E0310 01:00:39.743592 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:39.746119 kubelet[2252]: E0310 01:00:39.746043 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:40.726117 kubelet[2252]: E0310 01:00:40.725256 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Mar 10 01:00:40.726117 kubelet[2252]: E0310 01:00:40.725460 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:45.448524 kubelet[2252]: E0310 01:00:45.445475 2252 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 10 01:00:46.325554 kubelet[2252]: E0310 01:00:46.325115 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:00:46.803599 kubelet[2252]: E0310 01:00:46.801245 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:00:46.849355 kubelet[2252]: E0310 01:00:46.848526 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:00:47.581023 kubelet[2252]: E0310 01:00:47.580516 2252 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 
01:00:48.958644 kubelet[2252]: E0310 01:00:48.956455 2252 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:00:49.858051 kubelet[2252]: E0310 01:00:49.856577 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:00:49.863141 kubelet[2252]: E0310 01:00:49.863110 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:00:50.158243 kubelet[2252]: E0310 01:00:50.153293 2252 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.189b5515356a83c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:00:27.879269321 +0000 UTC m=+3.754699854,LastTimestamp:2026-03-10 01:00:27.879269321 +0000 UTC m=+3.754699854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:00:50.921101 kubelet[2252]: E0310 01:00:50.920145 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 10 01:00:51.865898 kubelet[2252]: I0310 01:00:51.864582 2252 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Mar 10 01:00:52.666595 kubelet[2252]: E0310 01:00:52.604598 2252 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:00:58.993524 kubelet[2252]: E0310 01:00:58.987367 2252 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:01:01.053550 kubelet[2252]: E0310 01:01:01.052581 2252 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:01:01.053550 kubelet[2252]: E0310 01:01:01.053596 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:04.397190 kubelet[2252]: I0310 01:01:04.394130 2252 apiserver.go:52] "Watching apiserver" Mar 10 01:01:04.867281 kubelet[2252]: I0310 01:01:04.864423 2252 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 01:01:05.063054 kubelet[2252]: E0310 01:01:05.061437 2252 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 10 01:01:05.364171 kubelet[2252]: I0310 01:01:05.361113 2252 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 01:01:05.364171 kubelet[2252]: E0310 01:01:05.361160 2252 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 10 01:01:05.364171 kubelet[2252]: E0310 01:01:05.345341 2252 event.go:359] "Server rejected event (will not retry!)" 
err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189b5515356a83c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:00:27.879269321 +0000 UTC m=+3.754699854,LastTimestamp:2026-03-10 01:00:27.879269321 +0000 UTC m=+3.754699854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:01:05.391438 kubelet[2252]: I0310 01:01:05.389124 2252 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:01:05.690385 kubelet[2252]: I0310 01:01:05.690334 2252 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:01:05.703575 kubelet[2252]: E0310 01:01:05.700222 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:05.842058 kubelet[2252]: I0310 01:01:05.838436 2252 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:01:05.842510 kubelet[2252]: E0310 01:01:05.842479 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:05.950304 kubelet[2252]: E0310 01:01:05.948351 2252 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:09.232304 kubelet[2252]: I0310 01:01:09.229429 2252 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.229203584 podStartE2EDuration="4.229203584s" podCreationTimestamp="2026-03-10 01:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:01:09.22723618 +0000 UTC m=+45.102666724" watchObservedRunningTime="2026-03-10 01:01:09.229203584 +0000 UTC m=+45.104634118" Mar 10 01:01:09.307207 kubelet[2252]: I0310 01:01:09.306329 2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.306308842 podStartE2EDuration="4.306308842s" podCreationTimestamp="2026-03-10 01:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:01:09.301484222 +0000 UTC m=+45.176914735" watchObservedRunningTime="2026-03-10 01:01:09.306308842 +0000 UTC m=+45.181739356" Mar 10 01:01:09.387895 kubelet[2252]: I0310 01:01:09.387567 2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.387547264 podStartE2EDuration="4.387547264s" podCreationTimestamp="2026-03-10 01:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:01:09.386935168 +0000 UTC m=+45.262365681" watchObservedRunningTime="2026-03-10 01:01:09.387547264 +0000 UTC m=+45.262977797" Mar 10 01:01:13.839640 systemd[1]: Reloading requested from client PID 2547 ('systemctl') (unit session-5.scope)... Mar 10 01:01:13.839809 systemd[1]: Reloading... Mar 10 01:01:14.023837 zram_generator::config[2589]: No configuration found. 
Mar 10 01:01:15.995344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 01:01:16.487752 systemd[1]: Reloading finished in 2647 ms. Mar 10 01:01:16.889379 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:01:16.915995 kubelet[2252]: I0310 01:01:16.903575 2252 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:01:17.033166 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 01:01:17.035241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:01:17.035568 systemd[1]: kubelet.service: Consumed 26.965s CPU time, 132.2M memory peak, 0B memory swap peak. Mar 10 01:01:17.098024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:01:18.020458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:01:18.036254 (kubelet)[2630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 01:01:18.208588 kubelet[2630]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 10 01:01:18.208588 kubelet[2630]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 10 01:01:18.208588 kubelet[2630]: I0310 01:01:18.208476 2630 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 10 01:01:18.263493 kubelet[2630]: I0310 01:01:18.263088 2630 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 10 01:01:18.263493 kubelet[2630]: I0310 01:01:18.263194 2630 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 01:01:18.263493 kubelet[2630]: I0310 01:01:18.263237 2630 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 01:01:18.263493 kubelet[2630]: I0310 01:01:18.263255 2630 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 10 01:01:18.269026 kubelet[2630]: I0310 01:01:18.268980 2630 server.go:956] "Client rotation is on, will bootstrap in background" Mar 10 01:01:18.277269 kubelet[2630]: I0310 01:01:18.276562 2630 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 10 01:01:18.286044 kubelet[2630]: I0310 01:01:18.285773 2630 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:01:18.297543 kubelet[2630]: E0310 01:01:18.297016 2630 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 10 01:01:18.297543 kubelet[2630]: I0310 01:01:18.297494 2630 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 10 01:01:18.316178 kubelet[2630]: I0310 01:01:18.316044 2630 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 10 01:01:18.316841 kubelet[2630]: I0310 01:01:18.316374 2630 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 01:01:18.316841 kubelet[2630]: I0310 01:01:18.316421 2630 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 01:01:18.316841 kubelet[2630]: I0310 01:01:18.316617 2630 topology_manager.go:138] "Creating topology manager with none policy" Mar 10 01:01:18.316841 
kubelet[2630]: I0310 01:01:18.316630 2630 container_manager_linux.go:306] "Creating device plugin manager" Mar 10 01:01:18.324869 kubelet[2630]: I0310 01:01:18.317348 2630 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 10 01:01:18.343407 kubelet[2630]: I0310 01:01:18.338567 2630 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:01:18.343407 kubelet[2630]: I0310 01:01:18.345279 2630 kubelet.go:475] "Attempting to sync node with API server" Mar 10 01:01:18.343407 kubelet[2630]: I0310 01:01:18.345303 2630 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 01:01:18.343407 kubelet[2630]: I0310 01:01:18.345361 2630 kubelet.go:387] "Adding apiserver pod source" Mar 10 01:01:18.343407 kubelet[2630]: I0310 01:01:18.345382 2630 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 01:01:18.401535 kubelet[2630]: I0310 01:01:18.400210 2630 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 10 01:01:18.424964 kubelet[2630]: I0310 01:01:18.424304 2630 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 01:01:18.424964 kubelet[2630]: I0310 01:01:18.424434 2630 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 10 01:01:18.470244 kubelet[2630]: I0310 01:01:18.470185 2630 server.go:1262] "Started kubelet" Mar 10 01:01:18.474071 kubelet[2630]: I0310 01:01:18.470445 2630 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 01:01:18.474071 kubelet[2630]: I0310 01:01:18.474000 2630 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 01:01:18.474071 kubelet[2630]: I0310 01:01:18.474053 2630 server_v1.go:49] 
"podresources" method="list" useActivePods=true Mar 10 01:01:18.478979 kubelet[2630]: I0310 01:01:18.478327 2630 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 01:01:18.501498 kubelet[2630]: I0310 01:01:18.501418 2630 server.go:310] "Adding debug handlers to kubelet server" Mar 10 01:01:18.506605 kubelet[2630]: I0310 01:01:18.502044 2630 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 10 01:01:18.509542 kubelet[2630]: I0310 01:01:18.503242 2630 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 01:01:18.515845 kubelet[2630]: I0310 01:01:18.514251 2630 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 10 01:01:18.515845 kubelet[2630]: I0310 01:01:18.514341 2630 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 10 01:01:18.515845 kubelet[2630]: I0310 01:01:18.514553 2630 reconciler.go:29] "Reconciler: start to sync state" Mar 10 01:01:18.551508 kubelet[2630]: E0310 01:01:18.550151 2630 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 01:01:18.652529 kubelet[2630]: I0310 01:01:18.652242 2630 factory.go:223] Registration of the containerd container factory successfully Mar 10 01:01:18.656535 kubelet[2630]: I0310 01:01:18.655212 2630 factory.go:223] Registration of the systemd container factory successfully Mar 10 01:01:18.656535 kubelet[2630]: I0310 01:01:18.655542 2630 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 01:01:18.805403 kubelet[2630]: I0310 01:01:18.802118 2630 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 10 01:01:18.845873 kubelet[2630]: I0310 01:01:18.845826 2630 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 10 01:01:18.846436 kubelet[2630]: I0310 01:01:18.846412 2630 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 10 01:01:18.863160 kubelet[2630]: I0310 01:01:18.860506 2630 kubelet.go:2428] "Starting kubelet main sync loop" Mar 10 01:01:18.863160 kubelet[2630]: E0310 01:01:18.861032 2630 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 01:01:18.963302 kubelet[2630]: E0310 01:01:18.962225 2630 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 10 01:01:19.003641 kubelet[2630]: I0310 01:01:19.002032 2630 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 10 01:01:19.003641 kubelet[2630]: I0310 01:01:19.002069 2630 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 10 01:01:19.003641 kubelet[2630]: I0310 01:01:19.002099 2630 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:01:19.003641 kubelet[2630]: I0310 01:01:19.002813 2630 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 10 01:01:19.003641 kubelet[2630]: I0310 01:01:19.002833 2630 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 10 01:01:19.003641 kubelet[2630]: I0310 01:01:19.002873 2630 policy_none.go:49] "None policy: Start" Mar 10 01:01:19.003641 kubelet[2630]: I0310 01:01:19.002891 2630 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 10 01:01:19.003641 kubelet[2630]: I0310 01:01:19.002987 2630 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 10 01:01:19.003641 kubelet[2630]: I0310 01:01:19.003127 2630 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 10 01:01:19.003641 
kubelet[2630]: I0310 01:01:19.003143 2630 policy_none.go:47] "Start" Mar 10 01:01:19.018993 kubelet[2630]: E0310 01:01:19.018485 2630 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 01:01:19.019224 kubelet[2630]: I0310 01:01:19.019191 2630 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 10 01:01:19.019319 kubelet[2630]: I0310 01:01:19.019213 2630 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 01:01:19.021495 kubelet[2630]: I0310 01:01:19.020342 2630 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 10 01:01:19.024525 kubelet[2630]: E0310 01:01:19.024320 2630 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 10 01:01:19.171091 kubelet[2630]: I0310 01:01:19.171039 2630 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:01:19.173309 kubelet[2630]: I0310 01:01:19.173264 2630 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:01:19.190600 kubelet[2630]: I0310 01:01:19.190406 2630 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:01:19.290986 kubelet[2630]: I0310 01:01:19.290526 2630 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:01:19.354178 kubelet[2630]: I0310 01:01:19.351180 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/341779218d71c2d172cb07742e7ae7a5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"341779218d71c2d172cb07742e7ae7a5\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:01:19.354178 kubelet[2630]: I0310 01:01:19.351333 2630 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:01:19.354178 kubelet[2630]: I0310 01:01:19.351439 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 10 01:01:19.354178 kubelet[2630]: I0310 01:01:19.351607 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/341779218d71c2d172cb07742e7ae7a5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"341779218d71c2d172cb07742e7ae7a5\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:01:19.354178 kubelet[2630]: I0310 01:01:19.352058 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:01:19.356565 kubelet[2630]: I0310 01:01:19.352089 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:01:19.356565 
kubelet[2630]: I0310 01:01:19.352125 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:01:19.356565 kubelet[2630]: I0310 01:01:19.352147 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:01:19.356565 kubelet[2630]: I0310 01:01:19.352166 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/341779218d71c2d172cb07742e7ae7a5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"341779218d71c2d172cb07742e7ae7a5\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:01:19.403130 kubelet[2630]: I0310 01:01:19.402544 2630 apiserver.go:52] "Watching apiserver" Mar 10 01:01:19.405387 kubelet[2630]: E0310 01:01:19.405018 2630 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:01:19.411825 kubelet[2630]: E0310 01:01:19.405790 2630 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 10 01:01:19.411825 kubelet[2630]: E0310 01:01:19.407797 2630 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 10 01:01:19.513526 kubelet[2630]: I0310 01:01:19.511005 2630 
kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 10 01:01:19.513526 kubelet[2630]: I0310 01:01:19.511494 2630 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 01:01:19.518572 kubelet[2630]: I0310 01:01:19.518198 2630 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 01:01:19.720054 kubelet[2630]: E0310 01:01:19.708532 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:19.723022 kubelet[2630]: E0310 01:01:19.717180 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:19.726978 kubelet[2630]: E0310 01:01:19.726356 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:19.989104 kubelet[2630]: E0310 01:01:19.988642 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:19.993236 kubelet[2630]: E0310 01:01:19.993100 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:19.994849 kubelet[2630]: E0310 01:01:19.994458 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:21.299185 kubelet[2630]: E0310 01:01:21.288129 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:21.398840 kubelet[2630]: E0310 01:01:21.398514 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:22.697783 kubelet[2630]: E0310 01:01:22.696103 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:23.709842 kubelet[2630]: E0310 01:01:23.708451 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:24.296563 kubelet[2630]: I0310 01:01:24.296287 2630 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 10 01:01:24.300788 containerd[1472]: time="2026-03-10T01:01:24.298609886Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 10 01:01:24.312037 kubelet[2630]: I0310 01:01:24.309243 2630 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 10 01:01:24.971448 systemd[1]: Created slice kubepods-besteffort-pod9b5de0c2_6768_473c_aa92_d2bdeec4dfdd.slice - libcontainer container kubepods-besteffort-pod9b5de0c2_6768_473c_aa92_d2bdeec4dfdd.slice. 
Mar 10 01:01:25.098593 kubelet[2630]: I0310 01:01:25.098397 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b5de0c2-6768-473c-aa92-d2bdeec4dfdd-lib-modules\") pod \"kube-proxy-g8kk5\" (UID: \"9b5de0c2-6768-473c-aa92-d2bdeec4dfdd\") " pod="kube-system/kube-proxy-g8kk5" Mar 10 01:01:25.098593 kubelet[2630]: I0310 01:01:25.098502 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zdvd\" (UniqueName: \"kubernetes.io/projected/9b5de0c2-6768-473c-aa92-d2bdeec4dfdd-kube-api-access-8zdvd\") pod \"kube-proxy-g8kk5\" (UID: \"9b5de0c2-6768-473c-aa92-d2bdeec4dfdd\") " pod="kube-system/kube-proxy-g8kk5" Mar 10 01:01:25.098593 kubelet[2630]: I0310 01:01:25.098529 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b5de0c2-6768-473c-aa92-d2bdeec4dfdd-kube-proxy\") pod \"kube-proxy-g8kk5\" (UID: \"9b5de0c2-6768-473c-aa92-d2bdeec4dfdd\") " pod="kube-system/kube-proxy-g8kk5" Mar 10 01:01:25.098593 kubelet[2630]: I0310 01:01:25.098544 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b5de0c2-6768-473c-aa92-d2bdeec4dfdd-xtables-lock\") pod \"kube-proxy-g8kk5\" (UID: \"9b5de0c2-6768-473c-aa92-d2bdeec4dfdd\") " pod="kube-system/kube-proxy-g8kk5" Mar 10 01:01:26.200723 kubelet[2630]: E0310 01:01:26.200146 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:26.206379 containerd[1472]: time="2026-03-10T01:01:26.206304424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g8kk5,Uid:9b5de0c2-6768-473c-aa92-d2bdeec4dfdd,Namespace:kube-system,Attempt:0,}" Mar 
10 01:01:26.701816 kubelet[2630]: E0310 01:01:26.701465 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:26.984277 kubelet[2630]: E0310 01:01:26.981249 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:28.316144 kubelet[2630]: E0310 01:01:28.307157 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:01:28.396589 containerd[1472]: time="2026-03-10T01:01:28.390492359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:01:28.396589 containerd[1472]: time="2026-03-10T01:01:28.391565964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:01:28.396589 containerd[1472]: time="2026-03-10T01:01:28.391592484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:01:28.396589 containerd[1472]: time="2026-03-10T01:01:28.392128675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:01:29.264318 systemd[1]: Started cri-containerd-e729cabcbceea9bb285c1563e31982afa04cfda300352a0d10d587a7c05fb0ef.scope - libcontainer container e729cabcbceea9bb285c1563e31982afa04cfda300352a0d10d587a7c05fb0ef. Mar 10 01:01:29.290162 systemd[1]: Created slice kubepods-burstable-pod4a4a68ce_8fbb_411b_9e7c_18f79849170d.slice - libcontainer container kubepods-burstable-pod4a4a68ce_8fbb_411b_9e7c_18f79849170d.slice. 
Mar 10 01:01:29.385466 kubelet[2630]: I0310 01:01:29.385049 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4a4a68ce-8fbb-411b-9e7c-18f79849170d-run\") pod \"kube-flannel-ds-cd9hs\" (UID: \"4a4a68ce-8fbb-411b-9e7c-18f79849170d\") " pod="kube-flannel/kube-flannel-ds-cd9hs"
Mar 10 01:01:29.385466 kubelet[2630]: I0310 01:01:29.385110 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/4a4a68ce-8fbb-411b-9e7c-18f79849170d-cni\") pod \"kube-flannel-ds-cd9hs\" (UID: \"4a4a68ce-8fbb-411b-9e7c-18f79849170d\") " pod="kube-flannel/kube-flannel-ds-cd9hs"
Mar 10 01:01:29.385466 kubelet[2630]: I0310 01:01:29.385137 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/4a4a68ce-8fbb-411b-9e7c-18f79849170d-flannel-cfg\") pod \"kube-flannel-ds-cd9hs\" (UID: \"4a4a68ce-8fbb-411b-9e7c-18f79849170d\") " pod="kube-flannel/kube-flannel-ds-cd9hs"
Mar 10 01:01:29.385466 kubelet[2630]: I0310 01:01:29.385309 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfjlf\" (UniqueName: \"kubernetes.io/projected/4a4a68ce-8fbb-411b-9e7c-18f79849170d-kube-api-access-dfjlf\") pod \"kube-flannel-ds-cd9hs\" (UID: \"4a4a68ce-8fbb-411b-9e7c-18f79849170d\") " pod="kube-flannel/kube-flannel-ds-cd9hs"
Mar 10 01:01:29.385466 kubelet[2630]: I0310 01:01:29.385353 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/4a4a68ce-8fbb-411b-9e7c-18f79849170d-cni-plugin\") pod \"kube-flannel-ds-cd9hs\" (UID: \"4a4a68ce-8fbb-411b-9e7c-18f79849170d\") " pod="kube-flannel/kube-flannel-ds-cd9hs"
Mar 10 01:01:29.386619 kubelet[2630]: I0310 01:01:29.385376 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a4a68ce-8fbb-411b-9e7c-18f79849170d-xtables-lock\") pod \"kube-flannel-ds-cd9hs\" (UID: \"4a4a68ce-8fbb-411b-9e7c-18f79849170d\") " pod="kube-flannel/kube-flannel-ds-cd9hs"
Mar 10 01:01:29.710131 kubelet[2630]: E0310 01:01:29.702885 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:29.714009 containerd[1472]: time="2026-03-10T01:01:29.713122872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cd9hs,Uid:4a4a68ce-8fbb-411b-9e7c-18f79849170d,Namespace:kube-flannel,Attempt:0,}"
Mar 10 01:01:29.760787 containerd[1472]: time="2026-03-10T01:01:29.760345565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g8kk5,Uid:9b5de0c2-6768-473c-aa92-d2bdeec4dfdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e729cabcbceea9bb285c1563e31982afa04cfda300352a0d10d587a7c05fb0ef\""
Mar 10 01:01:29.766875 kubelet[2630]: E0310 01:01:29.765551 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:29.791871 containerd[1472]: time="2026-03-10T01:01:29.791453211Z" level=info msg="CreateContainer within sandbox \"e729cabcbceea9bb285c1563e31982afa04cfda300352a0d10d587a7c05fb0ef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 10 01:01:30.088598 sudo[1603]: pam_unix(sudo:session): session closed for user root
Mar 10 01:01:30.110153 kubelet[2630]: E0310 01:01:30.106281 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:30.114231 sshd[1600]: pam_unix(sshd:session): session closed for user core
Mar 10 01:01:30.147139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2014906742.mount: Deactivated successfully.
Mar 10 01:01:30.158207 systemd[1]: sshd@4-10.0.0.60:22-10.0.0.1:58420.service: Deactivated successfully.
Mar 10 01:01:30.173614 systemd[1]: session-5.scope: Deactivated successfully.
Mar 10 01:01:30.176184 systemd[1]: session-5.scope: Consumed 44.636s CPU time, 166.4M memory peak, 0B memory swap peak.
Mar 10 01:01:30.177876 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit.
Mar 10 01:01:30.194876 containerd[1472]: time="2026-03-10T01:01:30.192579677Z" level=info msg="CreateContainer within sandbox \"e729cabcbceea9bb285c1563e31982afa04cfda300352a0d10d587a7c05fb0ef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"af5d709d7f4384e8b4742a4ee36bb0fd7ed7dcbf265503b88da7953525f4e4bf\""
Mar 10 01:01:30.198327 systemd-logind[1455]: Removed session 5.
Mar 10 01:01:30.209037 containerd[1472]: time="2026-03-10T01:01:30.208541890Z" level=info msg="StartContainer for \"af5d709d7f4384e8b4742a4ee36bb0fd7ed7dcbf265503b88da7953525f4e4bf\""
Mar 10 01:01:30.286096 containerd[1472]: time="2026-03-10T01:01:30.282185349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:01:30.286096 containerd[1472]: time="2026-03-10T01:01:30.285066300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:01:30.286096 containerd[1472]: time="2026-03-10T01:01:30.285092810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:01:30.286096 containerd[1472]: time="2026-03-10T01:01:30.285308222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:01:30.434546 systemd[1]: Started cri-containerd-af5d709d7f4384e8b4742a4ee36bb0fd7ed7dcbf265503b88da7953525f4e4bf.scope - libcontainer container af5d709d7f4384e8b4742a4ee36bb0fd7ed7dcbf265503b88da7953525f4e4bf.
Mar 10 01:01:30.481366 systemd[1]: Started cri-containerd-485361fd639b7642cc67e20b3c151da9508fbe3a42bd9021e4a8fce5704ca92b.scope - libcontainer container 485361fd639b7642cc67e20b3c151da9508fbe3a42bd9021e4a8fce5704ca92b.
Mar 10 01:01:30.677047 containerd[1472]: time="2026-03-10T01:01:30.675526431Z" level=info msg="StartContainer for \"af5d709d7f4384e8b4742a4ee36bb0fd7ed7dcbf265503b88da7953525f4e4bf\" returns successfully"
Mar 10 01:01:37.466295 kubelet[2630]: E0310 01:01:37.457292 2630 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.614s"
Mar 10 01:01:37.749778 containerd[1472]: time="2026-03-10T01:01:37.716172138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cd9hs,Uid:4a4a68ce-8fbb-411b-9e7c-18f79849170d,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"485361fd639b7642cc67e20b3c151da9508fbe3a42bd9021e4a8fce5704ca92b\""
Mar 10 01:01:37.848179 kubelet[2630]: E0310 01:01:37.847875 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:37.873229 kubelet[2630]: E0310 01:01:37.870251 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:37.903846 containerd[1472]: time="2026-03-10T01:01:37.901135125Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\""
Mar 10 01:01:37.917434 kubelet[2630]: I0310 01:01:37.915398 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g8kk5" podStartSLOduration=13.914597852 podStartE2EDuration="13.914597852s" podCreationTimestamp="2026-03-10 01:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:01:37.908135533 +0000 UTC m=+19.860775378" watchObservedRunningTime="2026-03-10 01:01:37.914597852 +0000 UTC m=+19.867237708"
Mar 10 01:01:38.546592 kubelet[2630]: E0310 01:01:38.545473 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:39.738093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733653152.mount: Deactivated successfully.
Mar 10 01:01:40.232419 containerd[1472]: time="2026-03-10T01:01:40.231518798Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:01:40.241254 containerd[1472]: time="2026-03-10T01:01:40.239486060Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008"
Mar 10 01:01:40.261135 containerd[1472]: time="2026-03-10T01:01:40.254256811Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:01:40.273795 containerd[1472]: time="2026-03-10T01:01:40.272788080Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:01:40.274228 containerd[1472]: time="2026-03-10T01:01:40.274194609Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 2.369324662s"
Mar 10 01:01:40.274322 containerd[1472]: time="2026-03-10T01:01:40.274302250Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\""
Mar 10 01:01:40.426455 containerd[1472]: time="2026-03-10T01:01:40.421966184Z" level=info msg="CreateContainer within sandbox \"485361fd639b7642cc67e20b3c151da9508fbe3a42bd9021e4a8fce5704ca92b\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Mar 10 01:01:40.602589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472194722.mount: Deactivated successfully.
Mar 10 01:01:40.648201 containerd[1472]: time="2026-03-10T01:01:40.647580309Z" level=info msg="CreateContainer within sandbox \"485361fd639b7642cc67e20b3c151da9508fbe3a42bd9021e4a8fce5704ca92b\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"892108bd79a7b9c03a9fb728d240e47d10a743342c048cbde94638976b7f4630\""
Mar 10 01:01:40.661362 containerd[1472]: time="2026-03-10T01:01:40.660981078Z" level=info msg="StartContainer for \"892108bd79a7b9c03a9fb728d240e47d10a743342c048cbde94638976b7f4630\""
Mar 10 01:01:41.171304 systemd[1]: Started cri-containerd-892108bd79a7b9c03a9fb728d240e47d10a743342c048cbde94638976b7f4630.scope - libcontainer container 892108bd79a7b9c03a9fb728d240e47d10a743342c048cbde94638976b7f4630.
Mar 10 01:01:41.460813 containerd[1472]: time="2026-03-10T01:01:41.452835092Z" level=info msg="StartContainer for \"892108bd79a7b9c03a9fb728d240e47d10a743342c048cbde94638976b7f4630\" returns successfully"
Mar 10 01:01:41.502851 systemd[1]: cri-containerd-892108bd79a7b9c03a9fb728d240e47d10a743342c048cbde94638976b7f4630.scope: Deactivated successfully.
Mar 10 01:01:41.586215 kubelet[2630]: E0310 01:01:41.586017 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:42.314332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-892108bd79a7b9c03a9fb728d240e47d10a743342c048cbde94638976b7f4630-rootfs.mount: Deactivated successfully.
Mar 10 01:01:42.431529 containerd[1472]: time="2026-03-10T01:01:42.427300761Z" level=info msg="shim disconnected" id=892108bd79a7b9c03a9fb728d240e47d10a743342c048cbde94638976b7f4630 namespace=k8s.io
Mar 10 01:01:42.431529 containerd[1472]: time="2026-03-10T01:01:42.427461800Z" level=warning msg="cleaning up after shim disconnected" id=892108bd79a7b9c03a9fb728d240e47d10a743342c048cbde94638976b7f4630 namespace=k8s.io
Mar 10 01:01:42.431529 containerd[1472]: time="2026-03-10T01:01:42.427478982Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:01:42.774851 kubelet[2630]: E0310 01:01:42.774298 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:43.902632 kubelet[2630]: E0310 01:01:43.902123 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:43.920152 containerd[1472]: time="2026-03-10T01:01:43.918413973Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\""
Mar 10 01:01:52.645644 containerd[1472]: time="2026-03-10T01:01:52.645414154Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:01:52.654278 containerd[1472]: time="2026-03-10T01:01:52.651992868Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574"
Mar 10 01:01:52.663987 containerd[1472]: time="2026-03-10T01:01:52.661641880Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:01:52.682165 containerd[1472]: time="2026-03-10T01:01:52.682106119Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:01:52.687237 containerd[1472]: time="2026-03-10T01:01:52.686506612Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 8.768038107s"
Mar 10 01:01:52.687237 containerd[1472]: time="2026-03-10T01:01:52.686613510Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\""
Mar 10 01:01:52.746820 containerd[1472]: time="2026-03-10T01:01:52.743120012Z" level=info msg="CreateContainer within sandbox \"485361fd639b7642cc67e20b3c151da9508fbe3a42bd9021e4a8fce5704ca92b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 10 01:01:52.848127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount689363642.mount: Deactivated successfully.
Mar 10 01:01:52.879019 containerd[1472]: time="2026-03-10T01:01:52.872388891Z" level=info msg="CreateContainer within sandbox \"485361fd639b7642cc67e20b3c151da9508fbe3a42bd9021e4a8fce5704ca92b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a61566cc81a9ab82d1b127f573fe158b50cd8ec55369118d1f59ed909ef21501\""
Mar 10 01:01:52.879019 containerd[1472]: time="2026-03-10T01:01:52.877450020Z" level=info msg="StartContainer for \"a61566cc81a9ab82d1b127f573fe158b50cd8ec55369118d1f59ed909ef21501\""
Mar 10 01:01:54.326273 systemd[1]: Started cri-containerd-a61566cc81a9ab82d1b127f573fe158b50cd8ec55369118d1f59ed909ef21501.scope - libcontainer container a61566cc81a9ab82d1b127f573fe158b50cd8ec55369118d1f59ed909ef21501.
Mar 10 01:01:56.392178 systemd[1]: cri-containerd-a61566cc81a9ab82d1b127f573fe158b50cd8ec55369118d1f59ed909ef21501.scope: Deactivated successfully.
Mar 10 01:01:56.395505 systemd[1]: cri-containerd-a61566cc81a9ab82d1b127f573fe158b50cd8ec55369118d1f59ed909ef21501.scope: Consumed 1.457s CPU time.
Mar 10 01:01:56.406638 containerd[1472]: time="2026-03-10T01:01:56.406231619Z" level=info msg="StartContainer for \"a61566cc81a9ab82d1b127f573fe158b50cd8ec55369118d1f59ed909ef21501\" returns successfully"
Mar 10 01:01:56.554084 kubelet[2630]: I0310 01:01:56.544090 2630 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 10 01:01:57.084235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a61566cc81a9ab82d1b127f573fe158b50cd8ec55369118d1f59ed909ef21501-rootfs.mount: Deactivated successfully.
Mar 10 01:01:57.152043 kubelet[2630]: E0310 01:01:57.132390 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:57.578244 containerd[1472]: time="2026-03-10T01:01:57.558105259Z" level=info msg="shim disconnected" id=a61566cc81a9ab82d1b127f573fe158b50cd8ec55369118d1f59ed909ef21501 namespace=k8s.io
Mar 10 01:01:57.578244 containerd[1472]: time="2026-03-10T01:01:57.561474686Z" level=warning msg="cleaning up after shim disconnected" id=a61566cc81a9ab82d1b127f573fe158b50cd8ec55369118d1f59ed909ef21501 namespace=k8s.io
Mar 10 01:01:57.578244 containerd[1472]: time="2026-03-10T01:01:57.561507738Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:01:57.621553 kubelet[2630]: I0310 01:01:57.620921 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7336182e-6eca-4f69-b7d8-1bf642a38200-config-volume\") pod \"coredns-66bc5c9577-6bt6m\" (UID: \"7336182e-6eca-4f69-b7d8-1bf642a38200\") " pod="kube-system/coredns-66bc5c9577-6bt6m"
Mar 10 01:01:57.621553 kubelet[2630]: I0310 01:01:57.621239 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/151e6af8-b338-4f43-bb3c-6c89d4dcc0f8-config-volume\") pod \"coredns-66bc5c9577-54xdt\" (UID: \"151e6af8-b338-4f43-bb3c-6c89d4dcc0f8\") " pod="kube-system/coredns-66bc5c9577-54xdt"
Mar 10 01:01:57.621553 kubelet[2630]: I0310 01:01:57.621276 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8ql\" (UniqueName: \"kubernetes.io/projected/7336182e-6eca-4f69-b7d8-1bf642a38200-kube-api-access-zc8ql\") pod \"coredns-66bc5c9577-6bt6m\" (UID: \"7336182e-6eca-4f69-b7d8-1bf642a38200\") " pod="kube-system/coredns-66bc5c9577-6bt6m"
Mar 10 01:01:57.621553 kubelet[2630]: I0310 01:01:57.621381 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv4rj\" (UniqueName: \"kubernetes.io/projected/151e6af8-b338-4f43-bb3c-6c89d4dcc0f8-kube-api-access-qv4rj\") pod \"coredns-66bc5c9577-54xdt\" (UID: \"151e6af8-b338-4f43-bb3c-6c89d4dcc0f8\") " pod="kube-system/coredns-66bc5c9577-54xdt"
Mar 10 01:01:57.681054 systemd[1]: Created slice kubepods-burstable-pod7336182e_6eca_4f69_b7d8_1bf642a38200.slice - libcontainer container kubepods-burstable-pod7336182e_6eca_4f69_b7d8_1bf642a38200.slice.
Mar 10 01:01:57.696219 systemd[1]: Created slice kubepods-burstable-pod151e6af8_b338_4f43_bb3c_6c89d4dcc0f8.slice - libcontainer container kubepods-burstable-pod151e6af8_b338_4f43_bb3c_6c89d4dcc0f8.slice.
Mar 10 01:01:58.086359 kubelet[2630]: E0310 01:01:58.085409 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:58.140041 containerd[1472]: time="2026-03-10T01:01:58.136927617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-54xdt,Uid:151e6af8-b338-4f43-bb3c-6c89d4dcc0f8,Namespace:kube-system,Attempt:0,}"
Mar 10 01:01:58.149090 kubelet[2630]: E0310 01:01:58.147401 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:58.165135 containerd[1472]: time="2026-03-10T01:01:58.152437471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6bt6m,Uid:7336182e-6eca-4f69-b7d8-1bf642a38200,Namespace:kube-system,Attempt:0,}"
Mar 10 01:01:58.364954 kubelet[2630]: E0310 01:01:58.363276 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:01:58.394150 containerd[1472]: time="2026-03-10T01:01:58.393967591Z" level=info msg="CreateContainer within sandbox \"485361fd639b7642cc67e20b3c151da9508fbe3a42bd9021e4a8fce5704ca92b\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Mar 10 01:01:58.788150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2409798224.mount: Deactivated successfully.
Mar 10 01:01:58.922543 containerd[1472]: time="2026-03-10T01:01:58.922236842Z" level=info msg="CreateContainer within sandbox \"485361fd639b7642cc67e20b3c151da9508fbe3a42bd9021e4a8fce5704ca92b\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"d65e639b3ed3973a33ee64dcde973f27566945a367ccb1c40b7f359b9a000e95\""
Mar 10 01:01:59.081967 containerd[1472]: time="2026-03-10T01:01:59.065141751Z" level=info msg="StartContainer for \"d65e639b3ed3973a33ee64dcde973f27566945a367ccb1c40b7f359b9a000e95\""
Mar 10 01:01:59.616123 systemd[1]: run-netns-cni\x2dcf7168d6\x2d44e2\x2d7ae7\x2d0716\x2d3b6f05275dd1.mount: Deactivated successfully.
Mar 10 01:01:59.616284 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d86077e4c563b6011dd55461a96996007c2b8db6d6401cb596ef5d37f9466156-shm.mount: Deactivated successfully.
Mar 10 01:01:59.645592 containerd[1472]: time="2026-03-10T01:01:59.645412066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6bt6m,Uid:7336182e-6eca-4f69-b7d8-1bf642a38200,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d86077e4c563b6011dd55461a96996007c2b8db6d6401cb596ef5d37f9466156\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 10 01:01:59.648978 kubelet[2630]: E0310 01:01:59.647321 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d86077e4c563b6011dd55461a96996007c2b8db6d6401cb596ef5d37f9466156\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 10 01:01:59.648978 kubelet[2630]: E0310 01:01:59.648298 2630 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d86077e4c563b6011dd55461a96996007c2b8db6d6401cb596ef5d37f9466156\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-6bt6m"
Mar 10 01:01:59.648978 kubelet[2630]: E0310 01:01:59.648506 2630 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d86077e4c563b6011dd55461a96996007c2b8db6d6401cb596ef5d37f9466156\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-6bt6m"
Mar 10 01:01:59.656164 kubelet[2630]: E0310 01:01:59.656081 2630 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6bt6m_kube-system(7336182e-6eca-4f69-b7d8-1bf642a38200)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6bt6m_kube-system(7336182e-6eca-4f69-b7d8-1bf642a38200)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d86077e4c563b6011dd55461a96996007c2b8db6d6401cb596ef5d37f9466156\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-6bt6m" podUID="7336182e-6eca-4f69-b7d8-1bf642a38200"
Mar 10 01:01:59.678208 systemd[1]: Started cri-containerd-d65e639b3ed3973a33ee64dcde973f27566945a367ccb1c40b7f359b9a000e95.scope - libcontainer container d65e639b3ed3973a33ee64dcde973f27566945a367ccb1c40b7f359b9a000e95.
Mar 10 01:01:59.882460 containerd[1472]: time="2026-03-10T01:01:59.879968687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-54xdt,Uid:151e6af8-b338-4f43-bb3c-6c89d4dcc0f8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bbdb98cf37f11662f42c07c2f3e27dcd18f88e3d1f8ff48bf8a8e0ea2acca392\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 10 01:01:59.886342 kubelet[2630]: E0310 01:01:59.884503 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbdb98cf37f11662f42c07c2f3e27dcd18f88e3d1f8ff48bf8a8e0ea2acca392\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 10 01:01:59.888956 kubelet[2630]: E0310 01:01:59.887496 2630 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbdb98cf37f11662f42c07c2f3e27dcd18f88e3d1f8ff48bf8a8e0ea2acca392\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-54xdt"
Mar 10 01:01:59.888956 kubelet[2630]: E0310 01:01:59.887625 2630 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbdb98cf37f11662f42c07c2f3e27dcd18f88e3d1f8ff48bf8a8e0ea2acca392\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-54xdt"
Mar 10 01:01:59.891922 kubelet[2630]: E0310 01:01:59.890456 2630 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-54xdt_kube-system(151e6af8-b338-4f43-bb3c-6c89d4dcc0f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-54xdt_kube-system(151e6af8-b338-4f43-bb3c-6c89d4dcc0f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbdb98cf37f11662f42c07c2f3e27dcd18f88e3d1f8ff48bf8a8e0ea2acca392\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-54xdt" podUID="151e6af8-b338-4f43-bb3c-6c89d4dcc0f8"
Mar 10 01:02:00.119367 containerd[1472]: time="2026-03-10T01:02:00.118105792Z" level=info msg="StartContainer for \"d65e639b3ed3973a33ee64dcde973f27566945a367ccb1c40b7f359b9a000e95\" returns successfully"
Mar 10 01:02:00.394266 systemd[1]: run-netns-cni\x2de362dbda\x2de545\x2d2cf8\x2dbb45\x2dbb7f089bc595.mount: Deactivated successfully.
Mar 10 01:02:00.394436 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbdb98cf37f11662f42c07c2f3e27dcd18f88e3d1f8ff48bf8a8e0ea2acca392-shm.mount: Deactivated successfully.
Mar 10 01:02:00.592923 kubelet[2630]: E0310 01:02:00.590374 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:00.691312 kubelet[2630]: I0310 01:02:00.683982 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-cd9hs" podStartSLOduration=16.870413127 podStartE2EDuration="31.683962714s" podCreationTimestamp="2026-03-10 01:01:29 +0000 UTC" firstStartedPulling="2026-03-10 01:01:37.890978773 +0000 UTC m=+19.843618608" lastFinishedPulling="2026-03-10 01:01:52.70452836 +0000 UTC m=+34.657168195" observedRunningTime="2026-03-10 01:02:00.683007284 +0000 UTC m=+42.635647139" watchObservedRunningTime="2026-03-10 01:02:00.683962714 +0000 UTC m=+42.636602549"
Mar 10 01:02:01.825251 systemd-networkd[1407]: flannel.1: Link UP
Mar 10 01:02:01.825341 systemd-networkd[1407]: flannel.1: Gained carrier
Mar 10 01:02:03.253248 systemd-networkd[1407]: flannel.1: Gained IPv6LL
Mar 10 01:02:11.909470 kubelet[2630]: E0310 01:02:11.908189 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:11.916421 kubelet[2630]: E0310 01:02:11.916086 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:11.920125 containerd[1472]: time="2026-03-10T01:02:11.920051169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-54xdt,Uid:151e6af8-b338-4f43-bb3c-6c89d4dcc0f8,Namespace:kube-system,Attempt:0,}"
Mar 10 01:02:11.949443 containerd[1472]: time="2026-03-10T01:02:11.929199941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6bt6m,Uid:7336182e-6eca-4f69-b7d8-1bf642a38200,Namespace:kube-system,Attempt:0,}"
Mar 10 01:02:12.360007 systemd-networkd[1407]: cni0: Link UP
Mar 10 01:02:12.360019 systemd-networkd[1407]: cni0: Gained carrier
Mar 10 01:02:12.368144 systemd-networkd[1407]: cni0: Lost carrier
Mar 10 01:02:12.499128 systemd-networkd[1407]: veth4c2aa180: Link UP
Mar 10 01:02:12.500383 systemd-networkd[1407]: vethe8a68b69: Link UP
Mar 10 01:02:12.520169 kernel: cni0: port 1(veth4c2aa180) entered blocking state
Mar 10 01:02:12.520295 kernel: cni0: port 1(veth4c2aa180) entered disabled state
Mar 10 01:02:12.527177 kernel: veth4c2aa180: entered allmulticast mode
Mar 10 01:02:12.547106 kernel: veth4c2aa180: entered promiscuous mode
Mar 10 01:02:12.595176 kernel: cni0: port 1(veth4c2aa180) entered blocking state
Mar 10 01:02:12.599510 kernel: cni0: port 1(veth4c2aa180) entered forwarding state
Mar 10 01:02:12.600418 kernel: cni0: port 1(veth4c2aa180) entered disabled state
Mar 10 01:02:12.660379 kernel: cni0: port 2(vethe8a68b69) entered blocking state
Mar 10 01:02:12.660504 kernel: cni0: port 2(vethe8a68b69) entered disabled state
Mar 10 01:02:12.660546 kernel: vethe8a68b69: entered allmulticast mode
Mar 10 01:02:12.682588 kernel: vethe8a68b69: entered promiscuous mode
Mar 10 01:02:12.722373 kernel: cni0: port 1(veth4c2aa180) entered blocking state
Mar 10 01:02:12.728071 kernel: cni0: port 1(veth4c2aa180) entered forwarding state
Mar 10 01:02:12.723097 systemd-networkd[1407]: veth4c2aa180: Gained carrier
Mar 10 01:02:12.727049 systemd-networkd[1407]: cni0: Gained carrier
Mar 10 01:02:12.753213 containerd[1472]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"}
Mar 10 01:02:12.753213 containerd[1472]: delegateAdd: netconf sent to delegate plugin:
Mar 10 01:02:12.889965 kernel: cni0: port 2(vethe8a68b69) entered blocking state
Mar 10 01:02:12.890215 kernel: cni0: port 2(vethe8a68b69) entered forwarding state
Mar 10 01:02:12.888126 systemd-networkd[1407]: vethe8a68b69: Gained carrier
Mar 10 01:02:12.920169 containerd[1472]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Mar 10 01:02:12.920169 containerd[1472]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"}
Mar 10 01:02:12.920169 containerd[1472]: delegateAdd: netconf sent to delegate plugin:
Mar 10 01:02:13.247147 containerd[1472]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Mar 10 01:02:13.247147 containerd[1472]: time="2026-03-10T01:02:13.236486979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:02:13.247147 containerd[1472]: time="2026-03-10T01:02:13.236603125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:02:13.247147 containerd[1472]: time="2026-03-10T01:02:13.236648449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:02:13.256213 containerd[1472]: time="2026-03-10T01:02:13.253107906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:02:13.320495 containerd[1472]: time="2026-03-10T01:02:13.319958017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:02:13.320495 containerd[1472]: time="2026-03-10T01:02:13.320055599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:02:13.320495 containerd[1472]: time="2026-03-10T01:02:13.320091316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:02:13.320495 containerd[1472]: time="2026-03-10T01:02:13.320258717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:02:13.637451 systemd[1]: Started cri-containerd-b8e50bb22757d757c3780ac3fe92da1c42cad9b38945fcf4f206d160933d3bc4.scope - libcontainer container b8e50bb22757d757c3780ac3fe92da1c42cad9b38945fcf4f206d160933d3bc4.
Mar 10 01:02:13.883976 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 10 01:02:13.944392 systemd-networkd[1407]: vethe8a68b69: Gained IPv6LL
Mar 10 01:02:14.002491 systemd[1]: Started cri-containerd-98faac54fe477df09c58af2cc40ab3b1453870e900595b2906d2e71c086d3160.scope - libcontainer container 98faac54fe477df09c58af2cc40ab3b1453870e900595b2906d2e71c086d3160.
Mar 10 01:02:14.153957 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 10 01:02:14.172471 containerd[1472]: time="2026-03-10T01:02:14.167649137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-54xdt,Uid:151e6af8-b338-4f43-bb3c-6c89d4dcc0f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8e50bb22757d757c3780ac3fe92da1c42cad9b38945fcf4f206d160933d3bc4\""
Mar 10 01:02:14.185068 kubelet[2630]: E0310 01:02:14.180095 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:14.260245 systemd-networkd[1407]: cni0: Gained IPv6LL
Mar 10 01:02:14.424347 containerd[1472]: time="2026-03-10T01:02:14.418475932Z" level=info msg="CreateContainer within sandbox \"b8e50bb22757d757c3780ac3fe92da1c42cad9b38945fcf4f206d160933d3bc4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 10 01:02:14.603448 containerd[1472]: time="2026-03-10T01:02:14.602126202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6bt6m,Uid:7336182e-6eca-4f69-b7d8-1bf642a38200,Namespace:kube-system,Attempt:0,} returns sandbox id \"98faac54fe477df09c58af2cc40ab3b1453870e900595b2906d2e71c086d3160\""
Mar 10 01:02:14.672140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238091959.mount: Deactivated successfully.
Mar 10 01:02:14.680993 kubelet[2630]: E0310 01:02:14.678312 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:14.700503 systemd-networkd[1407]: veth4c2aa180: Gained IPv6LL
Mar 10 01:02:14.701463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753823731.mount: Deactivated successfully.
Mar 10 01:02:14.759640 containerd[1472]: time="2026-03-10T01:02:14.758153314Z" level=info msg="CreateContainer within sandbox \"98faac54fe477df09c58af2cc40ab3b1453870e900595b2906d2e71c086d3160\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 10 01:02:14.786153 containerd[1472]: time="2026-03-10T01:02:14.785973024Z" level=info msg="CreateContainer within sandbox \"b8e50bb22757d757c3780ac3fe92da1c42cad9b38945fcf4f206d160933d3bc4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fccf47f9c7c8ec4b8d43573e4b187622a71997bbfb976babb3c9a529ed05f1d6\""
Mar 10 01:02:14.792297 containerd[1472]: time="2026-03-10T01:02:14.788021979Z" level=info msg="StartContainer for \"fccf47f9c7c8ec4b8d43573e4b187622a71997bbfb976babb3c9a529ed05f1d6\""
Mar 10 01:02:15.130602 containerd[1472]: time="2026-03-10T01:02:15.130468132Z" level=info msg="CreateContainer within sandbox \"98faac54fe477df09c58af2cc40ab3b1453870e900595b2906d2e71c086d3160\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2bc47ebc593e227c530835d27b8ac3f80ae2688801913a8eca7ad73547e2c5d\""
Mar 10 01:02:15.152110 containerd[1472]: time="2026-03-10T01:02:15.149340884Z" level=info msg="StartContainer for \"e2bc47ebc593e227c530835d27b8ac3f80ae2688801913a8eca7ad73547e2c5d\""
Mar 10 01:02:15.226518 systemd[1]: Started cri-containerd-fccf47f9c7c8ec4b8d43573e4b187622a71997bbfb976babb3c9a529ed05f1d6.scope - libcontainer container fccf47f9c7c8ec4b8d43573e4b187622a71997bbfb976babb3c9a529ed05f1d6.
Mar 10 01:02:15.489509 systemd[1]: Started cri-containerd-e2bc47ebc593e227c530835d27b8ac3f80ae2688801913a8eca7ad73547e2c5d.scope - libcontainer container e2bc47ebc593e227c530835d27b8ac3f80ae2688801913a8eca7ad73547e2c5d.
Mar 10 01:02:15.624360 containerd[1472]: time="2026-03-10T01:02:15.620036709Z" level=info msg="StartContainer for \"fccf47f9c7c8ec4b8d43573e4b187622a71997bbfb976babb3c9a529ed05f1d6\" returns successfully"
Mar 10 01:02:15.640621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343595181.mount: Deactivated successfully.
Mar 10 01:02:15.831389 containerd[1472]: time="2026-03-10T01:02:15.829518776Z" level=info msg="StartContainer for \"e2bc47ebc593e227c530835d27b8ac3f80ae2688801913a8eca7ad73547e2c5d\" returns successfully"
Mar 10 01:02:16.193997 kubelet[2630]: E0310 01:02:16.193146 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:16.235585 kubelet[2630]: E0310 01:02:16.222529 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:16.499177 kubelet[2630]: I0310 01:02:16.496247 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-54xdt" podStartSLOduration=52.496210453 podStartE2EDuration="52.496210453s" podCreationTimestamp="2026-03-10 01:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:02:16.493175517 +0000 UTC m=+58.445815372" watchObservedRunningTime="2026-03-10 01:02:16.496210453 +0000 UTC m=+58.448850289"
Mar 10 01:02:16.499177 kubelet[2630]: I0310 01:02:16.496493 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6bt6m" podStartSLOduration=52.496485606 podStartE2EDuration="52.496485606s" podCreationTimestamp="2026-03-10 01:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:02:16.389188462 +0000 UTC m=+58.341828298" watchObservedRunningTime="2026-03-10 01:02:16.496485606 +0000 UTC m=+58.449125482"
Mar 10 01:02:17.247538 kubelet[2630]: E0310 01:02:17.246509 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:17.261127 kubelet[2630]: E0310 01:02:17.258361 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:18.407160 kubelet[2630]: E0310 01:02:18.402174 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:18.449250 kubelet[2630]: E0310 01:02:18.410249 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:19.368381 kubelet[2630]: E0310 01:02:19.367942 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:19.368381 kubelet[2630]: E0310 01:02:19.368243 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:32.995396 kubelet[2630]: E0310 01:02:32.972187 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:48.966256 kubelet[2630]: E0310 01:02:48.957291 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:50.865443 kubelet[2630]: E0310 01:02:50.865182 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:02:53.885458 kubelet[2630]: E0310 01:02:53.881374 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:03:19.869114 kubelet[2630]: E0310 01:03:19.866983 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:03:32.879401 kubelet[2630]: E0310 01:03:32.879247 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:03:43.864435 kubelet[2630]: E0310 01:03:43.864243 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:03:52.383518 kubelet[2630]: E0310 01:03:52.382366 2630 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.827s"
Mar 10 01:03:52.402369 kubelet[2630]: E0310 01:03:52.399346 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:03:56.876127 kubelet[2630]: E0310 01:03:56.873620 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:04:14.868125 kubelet[2630]: E0310 01:04:14.867480 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:04:15.874163 kubelet[2630]: E0310 01:04:15.874120 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:04:31.499630 systemd[1]: Started sshd@5-10.0.0.60:22-10.0.0.1:34566.service - OpenSSH per-connection server daemon (10.0.0.1:34566).
Mar 10 01:04:32.038311 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 34566 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:04:32.046113 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:04:32.102112 systemd-logind[1455]: New session 6 of user core.
Mar 10 01:04:32.170424 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 10 01:04:34.625234 sshd[4088]: pam_unix(sshd:session): session closed for user core
Mar 10 01:04:34.634501 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit.
Mar 10 01:04:34.639266 systemd[1]: sshd@5-10.0.0.60:22-10.0.0.1:34566.service: Deactivated successfully.
Mar 10 01:04:34.648247 systemd[1]: session-6.scope: Deactivated successfully.
Mar 10 01:04:34.648593 systemd[1]: session-6.scope: Consumed 1.646s CPU time.
Mar 10 01:04:34.658485 systemd-logind[1455]: Removed session 6.
Mar 10 01:04:39.727414 systemd[1]: Started sshd@6-10.0.0.60:22-10.0.0.1:49028.service - OpenSSH per-connection server daemon (10.0.0.1:49028).
Mar 10 01:04:39.889208 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 49028 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:04:39.899136 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:04:39.925968 systemd-logind[1455]: New session 7 of user core.
Mar 10 01:04:39.948429 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 10 01:04:40.670239 sshd[4133]: pam_unix(sshd:session): session closed for user core
Mar 10 01:04:40.691050 systemd[1]: sshd@6-10.0.0.60:22-10.0.0.1:49028.service: Deactivated successfully.
Mar 10 01:04:40.708400 systemd[1]: session-7.scope: Deactivated successfully.
Mar 10 01:04:40.733518 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit.
Mar 10 01:04:40.747361 systemd-logind[1455]: Removed session 7.
Mar 10 01:04:45.826405 systemd[1]: Started sshd@7-10.0.0.60:22-10.0.0.1:56456.service - OpenSSH per-connection server daemon (10.0.0.1:56456).
Mar 10 01:04:46.117318 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 56456 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:04:46.138604 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:04:46.198545 systemd-logind[1455]: New session 8 of user core.
Mar 10 01:04:46.224643 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 10 01:04:46.906160 sshd[4183]: pam_unix(sshd:session): session closed for user core
Mar 10 01:04:46.929340 systemd[1]: sshd@7-10.0.0.60:22-10.0.0.1:56456.service: Deactivated successfully.
Mar 10 01:04:46.951452 systemd[1]: session-8.scope: Deactivated successfully.
Mar 10 01:04:46.955370 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit.
Mar 10 01:04:46.968358 systemd-logind[1455]: Removed session 8.
Mar 10 01:04:47.872861 kubelet[2630]: E0310 01:04:47.871311 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:04:52.028528 systemd[1]: Started sshd@8-10.0.0.60:22-10.0.0.1:56468.service - OpenSSH per-connection server daemon (10.0.0.1:56468).
Mar 10 01:04:52.289331 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 56468 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:04:52.310596 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:04:52.399295 systemd-logind[1455]: New session 9 of user core.
Mar 10 01:04:52.426525 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 10 01:04:53.557131 sshd[4219]: pam_unix(sshd:session): session closed for user core
Mar 10 01:04:53.579380 systemd[1]: sshd@8-10.0.0.60:22-10.0.0.1:56468.service: Deactivated successfully.
Mar 10 01:04:53.585334 systemd[1]: session-9.scope: Deactivated successfully.
Mar 10 01:04:53.596295 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit.
Mar 10 01:04:53.620581 systemd-logind[1455]: Removed session 9.
Mar 10 01:04:56.869633 kubelet[2630]: E0310 01:04:56.869124 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:04:57.875603 kubelet[2630]: E0310 01:04:57.873426 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:04:58.623414 systemd[1]: Started sshd@9-10.0.0.60:22-10.0.0.1:34322.service - OpenSSH per-connection server daemon (10.0.0.1:34322).
Mar 10 01:04:58.888636 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 34322 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:04:58.895329 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:04:58.927398 systemd-logind[1455]: New session 10 of user core.
Mar 10 01:04:58.957610 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 10 01:04:59.802313 sshd[4254]: pam_unix(sshd:session): session closed for user core
Mar 10 01:04:59.814171 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit.
Mar 10 01:04:59.818343 systemd[1]: sshd@9-10.0.0.60:22-10.0.0.1:34322.service: Deactivated successfully.
Mar 10 01:04:59.866647 systemd[1]: session-10.scope: Deactivated successfully.
Mar 10 01:04:59.877453 systemd-logind[1455]: Removed session 10.
Mar 10 01:05:02.875221 kubelet[2630]: E0310 01:05:02.870648 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:03.863345 kubelet[2630]: E0310 01:05:03.862360 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:04.899044 systemd[1]: Started sshd@10-10.0.0.60:22-10.0.0.1:35188.service - OpenSSH per-connection server daemon (10.0.0.1:35188).
Mar 10 01:05:05.141607 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 35188 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:05:05.154502 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:05:05.186496 systemd-logind[1455]: New session 11 of user core.
Mar 10 01:05:05.211433 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 10 01:05:06.299345 sshd[4290]: pam_unix(sshd:session): session closed for user core
Mar 10 01:05:06.310479 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit.
Mar 10 01:05:06.316152 systemd[1]: sshd@10-10.0.0.60:22-10.0.0.1:35188.service: Deactivated successfully.
Mar 10 01:05:06.321541 systemd[1]: session-11.scope: Deactivated successfully.
Mar 10 01:05:06.332268 systemd-logind[1455]: Removed session 11.
Mar 10 01:05:11.417546 systemd[1]: Started sshd@11-10.0.0.60:22-10.0.0.1:35196.service - OpenSSH per-connection server daemon (10.0.0.1:35196).
Mar 10 01:05:11.677550 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 35196 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:05:11.695143 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:05:11.733518 systemd-logind[1455]: New session 12 of user core.
Mar 10 01:05:11.778399 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 10 01:05:12.516015 sshd[4340]: pam_unix(sshd:session): session closed for user core
Mar 10 01:05:12.525447 systemd[1]: sshd@11-10.0.0.60:22-10.0.0.1:35196.service: Deactivated successfully.
Mar 10 01:05:12.536270 systemd[1]: session-12.scope: Deactivated successfully.
Mar 10 01:05:12.539549 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit.
Mar 10 01:05:12.547471 systemd-logind[1455]: Removed session 12.
Mar 10 01:05:17.754259 systemd[1]: Started sshd@12-10.0.0.60:22-10.0.0.1:47970.service - OpenSSH per-connection server daemon (10.0.0.1:47970).
Mar 10 01:05:18.005218 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 47970 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:05:18.012132 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:05:18.129169 systemd-logind[1455]: New session 13 of user core.
Mar 10 01:05:18.160324 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 10 01:05:18.708135 sshd[4382]: pam_unix(sshd:session): session closed for user core
Mar 10 01:05:18.721040 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit.
Mar 10 01:05:18.726473 systemd[1]: sshd@12-10.0.0.60:22-10.0.0.1:47970.service: Deactivated successfully.
Mar 10 01:05:18.769357 systemd[1]: session-13.scope: Deactivated successfully.
Mar 10 01:05:18.776555 systemd-logind[1455]: Removed session 13.
Mar 10 01:05:18.873095 kubelet[2630]: E0310 01:05:18.872406 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:23.775960 systemd[1]: Started sshd@13-10.0.0.60:22-10.0.0.1:48652.service - OpenSSH per-connection server daemon (10.0.0.1:48652).
Mar 10 01:05:23.867032 kubelet[2630]: E0310 01:05:23.866216 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:23.893481 sshd[4421]: Accepted publickey for core from 10.0.0.1 port 48652 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:05:23.907042 sshd[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:05:23.957342 systemd-logind[1455]: New session 14 of user core.
Mar 10 01:05:23.975116 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 10 01:05:24.525149 sshd[4421]: pam_unix(sshd:session): session closed for user core
Mar 10 01:05:24.552476 systemd[1]: sshd@13-10.0.0.60:22-10.0.0.1:48652.service: Deactivated successfully.
Mar 10 01:05:24.558463 systemd[1]: session-14.scope: Deactivated successfully.
Mar 10 01:05:24.569951 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit.
Mar 10 01:05:24.578219 systemd[1]: Started sshd@14-10.0.0.60:22-10.0.0.1:48664.service - OpenSSH per-connection server daemon (10.0.0.1:48664).
Mar 10 01:05:24.585305 systemd-logind[1455]: Removed session 14.
Mar 10 01:05:24.672027 sshd[4436]: Accepted publickey for core from 10.0.0.1 port 48664 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:05:24.684622 sshd[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:05:24.712631 systemd-logind[1455]: New session 15 of user core.
Mar 10 01:05:24.722371 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 10 01:05:25.298417 sshd[4436]: pam_unix(sshd:session): session closed for user core
Mar 10 01:05:25.329600 systemd[1]: sshd@14-10.0.0.60:22-10.0.0.1:48664.service: Deactivated successfully.
Mar 10 01:05:25.360636 systemd[1]: session-15.scope: Deactivated successfully.
Mar 10 01:05:25.377978 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit.
Mar 10 01:05:25.421881 systemd[1]: Started sshd@15-10.0.0.60:22-10.0.0.1:48668.service - OpenSSH per-connection server daemon (10.0.0.1:48668).
Mar 10 01:05:25.442352 systemd-logind[1455]: Removed session 15.
Mar 10 01:05:25.571348 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 48668 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:05:25.574981 sshd[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:05:25.596472 systemd-logind[1455]: New session 16 of user core.
Mar 10 01:05:25.618212 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 10 01:05:26.162396 sshd[4451]: pam_unix(sshd:session): session closed for user core
Mar 10 01:05:26.189876 systemd[1]: sshd@15-10.0.0.60:22-10.0.0.1:48668.service: Deactivated successfully.
Mar 10 01:05:26.198181 systemd[1]: session-16.scope: Deactivated successfully.
Mar 10 01:05:26.200536 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit.
Mar 10 01:05:26.204551 systemd-logind[1455]: Removed session 16.
Mar 10 01:05:31.204487 systemd[1]: Started sshd@16-10.0.0.60:22-10.0.0.1:48678.service - OpenSSH per-connection server daemon (10.0.0.1:48678).
Mar 10 01:05:31.362571 sshd[4492]: Accepted publickey for core from 10.0.0.1 port 48678 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:05:31.376175 sshd[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:05:31.409482 systemd-logind[1455]: New session 17 of user core.
Mar 10 01:05:31.424388 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 10 01:05:32.002282 sshd[4492]: pam_unix(sshd:session): session closed for user core
Mar 10 01:05:32.017222 systemd[1]: sshd@16-10.0.0.60:22-10.0.0.1:48678.service: Deactivated successfully.
Mar 10 01:05:32.025404 systemd[1]: session-17.scope: Deactivated successfully.
Mar 10 01:05:32.040380 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit.
Mar 10 01:05:32.046543 systemd-logind[1455]: Removed session 17.
Mar 10 01:05:37.029157 systemd[1]: Started sshd@17-10.0.0.60:22-10.0.0.1:34130.service - OpenSSH per-connection server daemon (10.0.0.1:34130).
Mar 10 01:05:37.175486 sshd[4526]: Accepted publickey for core from 10.0.0.1 port 34130 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:05:37.182567 sshd[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:05:37.212005 systemd-logind[1455]: New session 18 of user core.
Mar 10 01:05:37.229914 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 10 01:05:38.020040 sshd[4526]: pam_unix(sshd:session): session closed for user core
Mar 10 01:05:38.045956 systemd[1]: sshd@17-10.0.0.60:22-10.0.0.1:34130.service: Deactivated successfully.
Mar 10 01:05:38.067982 systemd[1]: session-18.scope: Deactivated successfully.
Mar 10 01:05:38.074088 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit.
Mar 10 01:05:38.090162 systemd-logind[1455]: Removed session 18.
Mar 10 01:05:43.073071 systemd[1]: Started sshd@18-10.0.0.60:22-10.0.0.1:36648.service - OpenSSH per-connection server daemon (10.0.0.1:36648).
Mar 10 01:05:43.241370 sshd[4576]: Accepted publickey for core from 10.0.0.1 port 36648 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:05:43.248235 sshd[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:05:43.283397 systemd-logind[1455]: New session 19 of user core.
Mar 10 01:05:43.303399 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 10 01:05:43.766479 sshd[4576]: pam_unix(sshd:session): session closed for user core
Mar 10 01:05:43.785512 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit.
Mar 10 01:05:43.787534 systemd[1]: sshd@18-10.0.0.60:22-10.0.0.1:36648.service: Deactivated successfully.
Mar 10 01:05:43.798346 systemd[1]: session-19.scope: Deactivated successfully.
Mar 10 01:05:43.813120 systemd-logind[1455]: Removed session 19.
Mar 10 01:05:48.834610 systemd[1]: Started sshd@19-10.0.0.60:22-10.0.0.1:36662.service - OpenSSH per-connection server daemon (10.0.0.1:36662).
Mar 10 01:05:48.993890 sshd[4611]: Accepted publickey for core from 10.0.0.1 port 36662 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:05:49.000249 sshd[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:05:49.039330 systemd-logind[1455]: New session 20 of user core.
Mar 10 01:05:49.061050 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 10 01:05:49.576263 sshd[4611]: pam_unix(sshd:session): session closed for user core
Mar 10 01:05:49.618134 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit.
Mar 10 01:05:49.627456 systemd[1]: sshd@19-10.0.0.60:22-10.0.0.1:36662.service: Deactivated successfully.
Mar 10 01:05:49.655033 systemd[1]: session-20.scope: Deactivated successfully.
Mar 10 01:05:49.677115 systemd-logind[1455]: Removed session 20.
Mar 10 01:05:54.624340 systemd[1]: Started sshd@20-10.0.0.60:22-10.0.0.1:41752.service - OpenSSH per-connection server daemon (10.0.0.1:41752).
Mar 10 01:05:54.875633 sshd[4645]: Accepted publickey for core from 10.0.0.1 port 41752 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:05:54.889619 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:05:54.915332 systemd-logind[1455]: New session 21 of user core.
Mar 10 01:05:54.929507 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 10 01:05:55.535519 sshd[4645]: pam_unix(sshd:session): session closed for user core
Mar 10 01:05:55.573504 systemd[1]: sshd@20-10.0.0.60:22-10.0.0.1:41752.service: Deactivated successfully.
Mar 10 01:05:55.591984 systemd[1]: session-21.scope: Deactivated successfully.
Mar 10 01:05:55.608267 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit.
Mar 10 01:05:55.620390 systemd-logind[1455]: Removed session 21.
Mar 10 01:05:58.885478 kubelet[2630]: E0310 01:05:58.881294 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:00.673345 systemd[1]: Started sshd@21-10.0.0.60:22-10.0.0.1:41754.service - OpenSSH per-connection server daemon (10.0.0.1:41754).
Mar 10 01:06:00.977642 sshd[4680]: Accepted publickey for core from 10.0.0.1 port 41754 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:00.995536 sshd[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:01.070327 systemd-logind[1455]: New session 22 of user core.
Mar 10 01:06:01.085275 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 10 01:06:01.705640 sshd[4680]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:01.730564 systemd[1]: sshd@21-10.0.0.60:22-10.0.0.1:41754.service: Deactivated successfully.
Mar 10 01:06:01.749280 systemd[1]: session-22.scope: Deactivated successfully.
Mar 10 01:06:01.751340 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit.
Mar 10 01:06:01.762489 systemd-logind[1455]: Removed session 22.
Mar 10 01:06:03.865947 kubelet[2630]: E0310 01:06:03.863157 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:06.831056 systemd[1]: Started sshd@22-10.0.0.60:22-10.0.0.1:45414.service - OpenSSH per-connection server daemon (10.0.0.1:45414).
Mar 10 01:06:07.055881 sshd[4721]: Accepted publickey for core from 10.0.0.1 port 45414 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:07.064980 sshd[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:07.112240 systemd-logind[1455]: New session 23 of user core.
Mar 10 01:06:07.167276 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 10 01:06:07.746559 sshd[4721]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:07.761425 systemd[1]: sshd@22-10.0.0.60:22-10.0.0.1:45414.service: Deactivated successfully.
Mar 10 01:06:07.773632 systemd[1]: session-23.scope: Deactivated successfully.
Mar 10 01:06:07.786519 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit.
Mar 10 01:06:07.802397 systemd-logind[1455]: Removed session 23.
Mar 10 01:06:08.870076 kubelet[2630]: E0310 01:06:08.870000 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:12.880582 kubelet[2630]: E0310 01:06:12.871626 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:12.895138 systemd[1]: Started sshd@23-10.0.0.60:22-10.0.0.1:42756.service - OpenSSH per-connection server daemon (10.0.0.1:42756).
Mar 10 01:06:13.207361 sshd[4759]: Accepted publickey for core from 10.0.0.1 port 42756 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:13.223641 sshd[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:13.293197 systemd-logind[1455]: New session 24 of user core.
Mar 10 01:06:13.318529 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 10 01:06:14.809332 sshd[4759]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:14.836589 systemd[1]: sshd@23-10.0.0.60:22-10.0.0.1:42756.service: Deactivated successfully.
Mar 10 01:06:14.859950 systemd[1]: session-24.scope: Deactivated successfully.
Mar 10 01:06:14.877473 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit.
Mar 10 01:06:14.882461 systemd-logind[1455]: Removed session 24.
Mar 10 01:06:19.875940 kubelet[2630]: E0310 01:06:19.872243 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:19.898568 systemd[1]: Started sshd@24-10.0.0.60:22-10.0.0.1:42770.service - OpenSSH per-connection server daemon (10.0.0.1:42770).
Mar 10 01:06:20.113223 sshd[4809]: Accepted publickey for core from 10.0.0.1 port 42770 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:20.120488 sshd[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:20.163413 systemd-logind[1455]: New session 25 of user core.
Mar 10 01:06:20.174105 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 10 01:06:20.769143 sshd[4809]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:20.794299 systemd[1]: sshd@24-10.0.0.60:22-10.0.0.1:42770.service: Deactivated successfully.
Mar 10 01:06:20.800444 systemd[1]: session-25.scope: Deactivated successfully.
Mar 10 01:06:20.803316 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit.
Mar 10 01:06:20.806366 systemd-logind[1455]: Removed session 25.
Mar 10 01:06:22.872214 kubelet[2630]: E0310 01:06:22.871281 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:26.382216 systemd[1]: Started sshd@25-10.0.0.60:22-10.0.0.1:35496.service - OpenSSH per-connection server daemon (10.0.0.1:35496).
Mar 10 01:06:26.596598 sshd[4847]: Accepted publickey for core from 10.0.0.1 port 35496 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:26.596596 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:26.652961 systemd-logind[1455]: New session 26 of user core.
Mar 10 01:06:26.685474 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 10 01:06:27.313115 sshd[4847]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:27.384232 systemd[1]: sshd@25-10.0.0.60:22-10.0.0.1:35496.service: Deactivated successfully.
Mar 10 01:06:27.389127 systemd[1]: session-26.scope: Deactivated successfully.
Mar 10 01:06:27.397357 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit.
Mar 10 01:06:27.470144 systemd[1]: Started sshd@26-10.0.0.60:22-10.0.0.1:35502.service - OpenSSH per-connection server daemon (10.0.0.1:35502).
Mar 10 01:06:27.484534 systemd-logind[1455]: Removed session 26.
Mar 10 01:06:27.610531 sshd[4868]: Accepted publickey for core from 10.0.0.1 port 35502 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:27.672448 sshd[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:27.727449 systemd-logind[1455]: New session 27 of user core.
Mar 10 01:06:27.785404 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 10 01:06:27.884156 kubelet[2630]: E0310 01:06:27.873362 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:29.688046 sshd[4868]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:29.755186 systemd[1]: sshd@26-10.0.0.60:22-10.0.0.1:35502.service: Deactivated successfully.
Mar 10 01:06:29.759961 systemd[1]: session-27.scope: Deactivated successfully.
Mar 10 01:06:29.761180 systemd[1]: session-27.scope: Consumed 1.056s CPU time.
Mar 10 01:06:29.776337 systemd-logind[1455]: Session 27 logged out. Waiting for processes to exit.
Mar 10 01:06:29.828062 systemd[1]: Started sshd@27-10.0.0.60:22-10.0.0.1:35504.service - OpenSSH per-connection server daemon (10.0.0.1:35504).
Mar 10 01:06:29.830371 systemd-logind[1455]: Removed session 27.
Mar 10 01:06:30.038389 sshd[4881]: Accepted publickey for core from 10.0.0.1 port 35504 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:30.068611 sshd[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:30.108433 systemd-logind[1455]: New session 28 of user core.
Mar 10 01:06:30.180156 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 10 01:06:32.908583 sshd[4881]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:32.946309 systemd[1]: sshd@27-10.0.0.60:22-10.0.0.1:35504.service: Deactivated successfully.
Mar 10 01:06:32.953644 systemd[1]: session-28.scope: Deactivated successfully.
Mar 10 01:06:32.959207 systemd[1]: session-28.scope: Consumed 1.795s CPU time.
Mar 10 01:06:32.964211 systemd-logind[1455]: Session 28 logged out. Waiting for processes to exit.
Mar 10 01:06:32.995139 systemd[1]: Started sshd@28-10.0.0.60:22-10.0.0.1:45950.service - OpenSSH per-connection server daemon (10.0.0.1:45950).
Mar 10 01:06:33.000629 systemd-logind[1455]: Removed session 28.
Mar 10 01:06:33.155366 sshd[4925]: Accepted publickey for core from 10.0.0.1 port 45950 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:33.162187 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:33.190967 systemd-logind[1455]: New session 29 of user core.
Mar 10 01:06:33.203103 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 10 01:06:34.477525 sshd[4925]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:34.510440 systemd[1]: sshd@28-10.0.0.60:22-10.0.0.1:45950.service: Deactivated successfully.
Mar 10 01:06:34.520222 systemd[1]: session-29.scope: Deactivated successfully.
Mar 10 01:06:34.526217 systemd-logind[1455]: Session 29 logged out. Waiting for processes to exit.
Mar 10 01:06:34.580080 systemd[1]: Started sshd@29-10.0.0.60:22-10.0.0.1:45954.service - OpenSSH per-connection server daemon (10.0.0.1:45954).
Mar 10 01:06:34.588256 systemd-logind[1455]: Removed session 29.
Mar 10 01:06:34.720164 sshd[4940]: Accepted publickey for core from 10.0.0.1 port 45954 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:34.724444 sshd[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:34.796246 systemd-logind[1455]: New session 30 of user core.
Mar 10 01:06:34.815208 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 10 01:06:35.468361 sshd[4940]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:35.493177 systemd[1]: sshd@29-10.0.0.60:22-10.0.0.1:45954.service: Deactivated successfully.
Mar 10 01:06:35.534066 systemd[1]: session-30.scope: Deactivated successfully.
Mar 10 01:06:35.548646 systemd-logind[1455]: Session 30 logged out. Waiting for processes to exit.
Mar 10 01:06:35.559646 systemd-logind[1455]: Removed session 30.
Mar 10 01:06:40.537535 systemd[1]: Started sshd@30-10.0.0.60:22-10.0.0.1:45962.service - OpenSSH per-connection server daemon (10.0.0.1:45962).
Mar 10 01:06:40.787379 sshd[4976]: Accepted publickey for core from 10.0.0.1 port 45962 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:40.811548 sshd[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:40.875034 systemd-logind[1455]: New session 31 of user core.
Mar 10 01:06:40.885110 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 10 01:06:41.807182 sshd[4976]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:41.819306 systemd[1]: sshd@30-10.0.0.60:22-10.0.0.1:45962.service: Deactivated successfully.
Mar 10 01:06:41.836502 systemd[1]: session-31.scope: Deactivated successfully.
Mar 10 01:06:41.840080 systemd-logind[1455]: Session 31 logged out. Waiting for processes to exit.
Mar 10 01:06:41.849080 systemd-logind[1455]: Removed session 31.
Mar 10 01:06:46.875084 systemd[1]: Started sshd@31-10.0.0.60:22-10.0.0.1:43888.service - OpenSSH per-connection server daemon (10.0.0.1:43888).
Mar 10 01:06:47.219487 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 43888 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:47.229213 sshd[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:47.277969 systemd-logind[1455]: New session 32 of user core.
Mar 10 01:06:47.291310 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 10 01:06:48.035626 sshd[5024]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:48.056209 systemd[1]: sshd@31-10.0.0.60:22-10.0.0.1:43888.service: Deactivated successfully.
Mar 10 01:06:48.081605 systemd[1]: session-32.scope: Deactivated successfully.
Mar 10 01:06:48.095158 systemd-logind[1455]: Session 32 logged out. Waiting for processes to exit.
Mar 10 01:06:48.107542 systemd-logind[1455]: Removed session 32.
Mar 10 01:06:53.115282 systemd[1]: Started sshd@32-10.0.0.60:22-10.0.0.1:33482.service - OpenSSH per-connection server daemon (10.0.0.1:33482).
Mar 10 01:06:53.304176 sshd[5064]: Accepted publickey for core from 10.0.0.1 port 33482 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:53.310477 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:53.383476 systemd-logind[1455]: New session 33 of user core.
Mar 10 01:06:53.444065 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 10 01:06:54.153486 sshd[5064]: pam_unix(sshd:session): session closed for user core
Mar 10 01:06:54.193600 systemd[1]: sshd@32-10.0.0.60:22-10.0.0.1:33482.service: Deactivated successfully.
Mar 10 01:06:54.212497 systemd[1]: session-33.scope: Deactivated successfully.
Mar 10 01:06:54.218985 systemd-logind[1455]: Session 33 logged out. Waiting for processes to exit.
Mar 10 01:06:54.246563 systemd-logind[1455]: Removed session 33.
Mar 10 01:06:59.282562 systemd[1]: Started sshd@33-10.0.0.60:22-10.0.0.1:33492.service - OpenSSH per-connection server daemon (10.0.0.1:33492).
Mar 10 01:06:59.520268 sshd[5099]: Accepted publickey for core from 10.0.0.1 port 33492 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:06:59.561433 sshd[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:06:59.608394 systemd-logind[1455]: New session 34 of user core.
Mar 10 01:06:59.647383 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 10 01:07:00.447254 sshd[5099]: pam_unix(sshd:session): session closed for user core
Mar 10 01:07:00.468844 systemd[1]: sshd@33-10.0.0.60:22-10.0.0.1:33492.service: Deactivated successfully.
Mar 10 01:07:00.482063 systemd[1]: session-34.scope: Deactivated successfully.
Mar 10 01:07:00.488588 systemd-logind[1455]: Session 34 logged out. Waiting for processes to exit.
Mar 10 01:07:00.504167 systemd-logind[1455]: Removed session 34.
Mar 10 01:07:00.889489 kubelet[2630]: E0310 01:07:00.880447 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:05.601263 systemd[1]: Started sshd@34-10.0.0.60:22-10.0.0.1:40400.service - OpenSSH per-connection server daemon (10.0.0.1:40400).
Mar 10 01:07:06.092581 sshd[5134]: Accepted publickey for core from 10.0.0.1 port 40400 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:07:06.109274 sshd[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:07:06.190276 systemd-logind[1455]: New session 35 of user core.
Mar 10 01:07:06.217518 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 10 01:07:06.950079 sshd[5134]: pam_unix(sshd:session): session closed for user core
Mar 10 01:07:06.984257 systemd[1]: sshd@34-10.0.0.60:22-10.0.0.1:40400.service: Deactivated successfully.
Mar 10 01:07:07.003336 systemd[1]: session-35.scope: Deactivated successfully.
Mar 10 01:07:07.024404 systemd-logind[1455]: Session 35 logged out. Waiting for processes to exit.
Mar 10 01:07:07.069981 systemd-logind[1455]: Removed session 35.
Mar 10 01:07:08.866575 kubelet[2630]: E0310 01:07:08.865332 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:12.008272 systemd[1]: Started sshd@35-10.0.0.60:22-10.0.0.1:40410.service - OpenSSH per-connection server daemon (10.0.0.1:40410).
Mar 10 01:07:12.166284 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 40410 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:07:12.177550 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:07:12.236025 systemd-logind[1455]: New session 36 of user core.
Mar 10 01:07:12.259250 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 10 01:07:12.792973 sshd[5170]: pam_unix(sshd:session): session closed for user core
Mar 10 01:07:12.807996 systemd-logind[1455]: Session 36 logged out. Waiting for processes to exit.
Mar 10 01:07:12.808511 systemd[1]: sshd@35-10.0.0.60:22-10.0.0.1:40410.service: Deactivated successfully.
Mar 10 01:07:12.816645 systemd[1]: session-36.scope: Deactivated successfully.
Mar 10 01:07:12.820070 systemd-logind[1455]: Removed session 36.
Mar 10 01:07:17.913285 systemd[1]: Started sshd@36-10.0.0.60:22-10.0.0.1:33354.service - OpenSSH per-connection server daemon (10.0.0.1:33354).
Mar 10 01:07:18.144190 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 33354 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:07:18.158630 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:07:18.204574 systemd-logind[1455]: New session 37 of user core.
Mar 10 01:07:18.239215 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 10 01:07:18.926496 sshd[5220]: pam_unix(sshd:session): session closed for user core
Mar 10 01:07:18.946355 systemd[1]: sshd@36-10.0.0.60:22-10.0.0.1:33354.service: Deactivated successfully.
Mar 10 01:07:18.967530 systemd[1]: session-37.scope: Deactivated successfully.
Mar 10 01:07:18.980323 systemd-logind[1455]: Session 37 logged out. Waiting for processes to exit.
Mar 10 01:07:18.995190 systemd-logind[1455]: Removed session 37.
Mar 10 01:07:23.993538 systemd[1]: Started sshd@37-10.0.0.60:22-10.0.0.1:60078.service - OpenSSH per-connection server daemon (10.0.0.1:60078).
Mar 10 01:07:24.135798 sshd[5264]: Accepted publickey for core from 10.0.0.1 port 60078 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:07:24.146209 sshd[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:07:24.177463 systemd-logind[1455]: New session 38 of user core.
Mar 10 01:07:24.189299 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 10 01:07:24.654573 sshd[5264]: pam_unix(sshd:session): session closed for user core
Mar 10 01:07:24.671045 systemd[1]: sshd@37-10.0.0.60:22-10.0.0.1:60078.service: Deactivated successfully.
Mar 10 01:07:24.678141 systemd[1]: session-38.scope: Deactivated successfully.
Mar 10 01:07:24.681501 systemd-logind[1455]: Session 38 logged out. Waiting for processes to exit.
Mar 10 01:07:24.697363 systemd-logind[1455]: Removed session 38.
Mar 10 01:07:29.697872 systemd[1]: Started sshd@38-10.0.0.60:22-10.0.0.1:60086.service - OpenSSH per-connection server daemon (10.0.0.1:60086).
Mar 10 01:07:29.885890 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 60086 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:07:29.898416 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:07:29.934166 systemd-logind[1455]: New session 39 of user core.
Mar 10 01:07:29.964028 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 10 01:07:30.486470 sshd[5298]: pam_unix(sshd:session): session closed for user core
Mar 10 01:07:30.507312 systemd[1]: sshd@38-10.0.0.60:22-10.0.0.1:60086.service: Deactivated successfully.
Mar 10 01:07:30.518557 systemd[1]: session-39.scope: Deactivated successfully.
Mar 10 01:07:30.525430 systemd-logind[1455]: Session 39 logged out. Waiting for processes to exit.
Mar 10 01:07:30.532645 systemd-logind[1455]: Removed session 39.
Mar 10 01:07:30.873582 kubelet[2630]: E0310 01:07:30.872052 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:33.917301 kubelet[2630]: E0310 01:07:33.870564 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:35.587863 systemd[1]: Started sshd@39-10.0.0.60:22-10.0.0.1:60934.service - OpenSSH per-connection server daemon (10.0.0.1:60934).
Mar 10 01:07:35.972406 kubelet[2630]: E0310 01:07:35.972249 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:36.192864 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 60934 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:07:36.291320 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:07:36.360818 systemd-logind[1455]: New session 40 of user core.
Mar 10 01:07:36.394304 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 10 01:07:36.914313 sshd[5334]: pam_unix(sshd:session): session closed for user core
Mar 10 01:07:36.928899 systemd-logind[1455]: Session 40 logged out. Waiting for processes to exit.
Mar 10 01:07:36.932348 systemd[1]: sshd@39-10.0.0.60:22-10.0.0.1:60934.service: Deactivated successfully.
Mar 10 01:07:36.948218 systemd[1]: session-40.scope: Deactivated successfully.
Mar 10 01:07:36.954608 systemd-logind[1455]: Removed session 40.
Mar 10 01:07:41.984534 systemd[1]: Started sshd@40-10.0.0.60:22-10.0.0.1:60942.service - OpenSSH per-connection server daemon (10.0.0.1:60942).
Mar 10 01:07:42.122252 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 60942 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:07:42.157330 sshd[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:07:42.199093 systemd-logind[1455]: New session 41 of user core.
Mar 10 01:07:42.219057 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 10 01:07:42.723160 sshd[5371]: pam_unix(sshd:session): session closed for user core
Mar 10 01:07:42.769059 systemd[1]: sshd@40-10.0.0.60:22-10.0.0.1:60942.service: Deactivated successfully.
Mar 10 01:07:42.783288 systemd[1]: session-41.scope: Deactivated successfully.
Mar 10 01:07:42.791302 systemd-logind[1455]: Session 41 logged out. Waiting for processes to exit.
Mar 10 01:07:42.798542 systemd-logind[1455]: Removed session 41.
Mar 10 01:07:42.867528 kubelet[2630]: E0310 01:07:42.863817 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:47.815878 systemd[1]: Started sshd@41-10.0.0.60:22-10.0.0.1:36730.service - OpenSSH per-connection server daemon (10.0.0.1:36730).
Mar 10 01:07:47.973275 sshd[5405]: Accepted publickey for core from 10.0.0.1 port 36730 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:07:47.979255 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:07:48.045283 systemd-logind[1455]: New session 42 of user core.
Mar 10 01:07:48.074197 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 10 01:07:48.722235 sshd[5405]: pam_unix(sshd:session): session closed for user core
Mar 10 01:07:48.732351 systemd-logind[1455]: Session 42 logged out. Waiting for processes to exit.
Mar 10 01:07:48.738641 systemd[1]: sshd@41-10.0.0.60:22-10.0.0.1:36730.service: Deactivated successfully.
Mar 10 01:07:48.776256 systemd[1]: session-42.scope: Deactivated successfully.
Mar 10 01:07:48.780484 systemd-logind[1455]: Removed session 42.
Mar 10 01:07:53.785469 systemd[1]: Started sshd@42-10.0.0.60:22-10.0.0.1:53204.service - OpenSSH per-connection server daemon (10.0.0.1:53204).
Mar 10 01:07:53.869894 kubelet[2630]: E0310 01:07:53.866309 2630 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:53.894157 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 53204 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:07:53.897327 sshd[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:07:53.925279 systemd-logind[1455]: New session 43 of user core.
Mar 10 01:07:53.967426 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 10 01:07:54.474162 sshd[5447]: pam_unix(sshd:session): session closed for user core
Mar 10 01:07:54.490093 systemd[1]: sshd@42-10.0.0.60:22-10.0.0.1:53204.service: Deactivated successfully.
Mar 10 01:07:54.502630 systemd[1]: session-43.scope: Deactivated successfully.
Mar 10 01:07:54.509537 systemd-logind[1455]: Session 43 logged out. Waiting for processes to exit.
Mar 10 01:07:54.514576 systemd-logind[1455]: Removed session 43.