Mar 7 01:46:40.457840 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026 Mar 7 01:46:40.457876 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:46:40.457894 kernel: BIOS-provided physical RAM map: Mar 7 01:46:40.457905 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 7 01:46:40.457914 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 7 01:46:40.457922 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 7 01:46:40.457932 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 7 01:46:40.457941 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 7 01:46:40.457950 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 7 01:46:40.457958 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 7 01:46:40.457972 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 7 01:46:40.457980 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 7 01:46:40.457990 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 7 01:46:40.458000 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 7 01:46:40.458012 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 7 01:46:40.458024 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 7 01:46:40.458037 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 7 01:46:40.458046 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 7 01:46:40.458058 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 7 01:46:40.458067 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 7 01:46:40.458075 kernel: NX (Execute Disable) protection: active Mar 7 01:46:40.458086 kernel: APIC: Static calls initialized Mar 7 01:46:40.458096 kernel: efi: EFI v2.7 by EDK II Mar 7 01:46:40.458105 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 7 01:46:40.458115 kernel: SMBIOS 2.8 present. Mar 7 01:46:40.458127 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 7 01:46:40.458135 kernel: Hypervisor detected: KVM Mar 7 01:46:40.458149 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 7 01:46:40.458159 kernel: kvm-clock: using sched offset of 17586461996 cycles Mar 7 01:46:40.458170 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 7 01:46:40.458180 kernel: tsc: Detected 2445.426 MHz processor Mar 7 01:46:40.459595 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 7 01:46:40.459611 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 7 01:46:40.459622 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 7 01:46:40.459632 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 7 01:46:40.459642 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 7 01:46:40.459660 kernel: Using GB pages for direct mapping Mar 7 01:46:40.459669 kernel: Secure boot disabled Mar 7 01:46:40.459678 kernel: ACPI: Early table checksum verification disabled Mar 7 01:46:40.459687 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 7 01:46:40.459703 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 7 01:46:40.459714 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:46:40.459724 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:46:40.459739 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 7 01:46:40.459750 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:46:40.459760 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:46:40.459771 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:46:40.459781 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:46:40.459792 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 7 01:46:40.459802 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 7 01:46:40.459816 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 7 01:46:40.459827 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 7 01:46:40.459838 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 7 01:46:40.459850 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 7 01:46:40.459862 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 7 01:46:40.459871 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 7 01:46:40.459879 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 7 01:46:40.459891 kernel: No NUMA configuration found Mar 7 01:46:40.459903 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 7 01:46:40.459917 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 7 01:46:40.459928 kernel: Zone ranges: Mar 7 01:46:40.459939 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 01:46:40.459950 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 7 01:46:40.459960 kernel: Normal empty Mar 7 01:46:40.459970 
kernel: Movable zone start for each node Mar 7 01:46:40.459981 kernel: Early memory node ranges Mar 7 01:46:40.459991 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 7 01:46:40.460001 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 7 01:46:40.460015 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 7 01:46:40.460026 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 7 01:46:40.460037 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 7 01:46:40.460047 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 7 01:46:40.460057 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 7 01:46:40.460068 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:46:40.460078 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 7 01:46:40.460088 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 7 01:46:40.460098 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:46:40.460109 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 7 01:46:40.460123 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 7 01:46:40.460133 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 7 01:46:40.460144 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 7 01:46:40.460154 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 7 01:46:40.460164 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 7 01:46:40.460174 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 7 01:46:40.460184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 7 01:46:40.460338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 01:46:40.460359 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 7 01:46:40.460369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 7 01:46:40.460380 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Mar 7 01:46:40.460392 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 7 01:46:40.460402 kernel: TSC deadline timer available Mar 7 01:46:40.460411 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 7 01:46:40.460422 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 7 01:46:40.460433 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 7 01:46:40.460444 kernel: kvm-guest: setup PV sched yield Mar 7 01:46:40.460455 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 7 01:46:40.460471 kernel: Booting paravirtualized kernel on KVM Mar 7 01:46:40.460481 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 01:46:40.460492 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 7 01:46:40.460503 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 7 01:46:40.460513 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 7 01:46:40.460525 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 7 01:46:40.460535 kernel: kvm-guest: PV spinlocks enabled Mar 7 01:46:40.460546 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 01:46:40.460558 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:46:40.460573 kernel: random: crng init done Mar 7 01:46:40.460583 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:46:40.460594 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 01:46:40.460604 kernel: Fallback order for Node 0: 0 Mar 7 01:46:40.460615 kernel: Built 1 
zonelists, mobility grouping on. Total pages: 629759 Mar 7 01:46:40.460626 kernel: Policy zone: DMA32 Mar 7 01:46:40.460637 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:46:40.460648 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 7 01:46:40.460663 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 7 01:46:40.460674 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 01:46:40.460685 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 01:46:40.460696 kernel: Dynamic Preempt: voluntary Mar 7 01:46:40.460707 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:46:40.460730 kernel: rcu: RCU event tracing is enabled. Mar 7 01:46:40.460745 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 7 01:46:40.460756 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:46:40.460769 kernel: Rude variant of Tasks RCU enabled. Mar 7 01:46:40.460780 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:46:40.460791 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 7 01:46:40.460802 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 7 01:46:40.460817 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 7 01:46:40.460828 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 7 01:46:40.460840 kernel: Console: colour dummy device 80x25 Mar 7 01:46:40.460853 kernel: printk: console [ttyS0] enabled Mar 7 01:46:40.460869 kernel: ACPI: Core revision 20230628 Mar 7 01:46:40.460879 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 7 01:46:40.460889 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 01:46:40.460901 kernel: x2apic enabled Mar 7 01:46:40.460914 kernel: APIC: Switched APIC routing to: physical x2apic Mar 7 01:46:40.460924 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 7 01:46:40.460935 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 7 01:46:40.460946 kernel: kvm-guest: setup PV IPIs Mar 7 01:46:40.460957 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 7 01:46:40.460973 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 7 01:46:40.460985 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 7 01:46:40.460997 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 7 01:46:40.461009 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 7 01:46:40.461019 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 7 01:46:40.461029 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 01:46:40.461041 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 01:46:40.461052 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 01:46:40.461063 kernel: Speculative Store Bypass: Vulnerable Mar 7 01:46:40.461080 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 7 01:46:40.461092 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 7 01:46:40.461104 kernel: active return thunk: srso_alias_return_thunk Mar 7 01:46:40.461118 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 7 01:46:40.461128 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 7 01:46:40.461137 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:46:40.461150 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 01:46:40.461162 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 01:46:40.461172 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 01:46:40.461241 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 01:46:40.461318 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 7 01:46:40.461332 kernel: Freeing SMP alternatives memory: 32K Mar 7 01:46:40.461344 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:46:40.461355 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:46:40.461366 kernel: landlock: Up and running. Mar 7 01:46:40.461376 kernel: SELinux: Initializing. Mar 7 01:46:40.461388 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:46:40.461399 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:46:40.461416 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 7 01:46:40.461427 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 01:46:40.461438 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 01:46:40.461450 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 01:46:40.461461 kernel: Performance Events: PMU not available due to virtualization, using software events only. 
Mar 7 01:46:40.461472 kernel: signal: max sigframe size: 1776 Mar 7 01:46:40.461484 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:46:40.461495 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:46:40.461510 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 7 01:46:40.461522 kernel: smp: Bringing up secondary CPUs ... Mar 7 01:46:40.461533 kernel: smpboot: x86: Booting SMP configuration: Mar 7 01:46:40.461543 kernel: .... node #0, CPUs: #1 #2 #3 Mar 7 01:46:40.461555 kernel: smp: Brought up 1 node, 4 CPUs Mar 7 01:46:40.461566 kernel: smpboot: Max logical packages: 1 Mar 7 01:46:40.461577 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 7 01:46:40.461589 kernel: devtmpfs: initialized Mar 7 01:46:40.461600 kernel: x86/mm: Memory block size: 128MB Mar 7 01:46:40.461611 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 7 01:46:40.461627 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 7 01:46:40.461639 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 7 01:46:40.461650 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 7 01:46:40.461661 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 7 01:46:40.461673 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:46:40.461685 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 7 01:46:40.461696 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:46:40.461707 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:46:40.461723 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:46:40.461735 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:46:40.461745 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 7 
01:46:40.461754 kernel: audit: type=2000 audit(1772847993.203:1): state=initialized audit_enabled=0 res=1 Mar 7 01:46:40.461765 kernel: cpuidle: using governor menu Mar 7 01:46:40.461775 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:46:40.461787 kernel: dca service started, version 1.12.1 Mar 7 01:46:40.461798 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 7 01:46:40.461810 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 7 01:46:40.461826 kernel: PCI: Using configuration type 1 for base access Mar 7 01:46:40.461838 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 7 01:46:40.461851 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 01:46:40.461862 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 01:46:40.461872 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 01:46:40.461884 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 01:46:40.461896 kernel: ACPI: Added _OSI(Module Device) Mar 7 01:46:40.461907 kernel: ACPI: Added _OSI(Processor Device) Mar 7 01:46:40.461916 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 01:46:40.461932 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 01:46:40.461943 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 7 01:46:40.461954 kernel: ACPI: Interpreter enabled Mar 7 01:46:40.461965 kernel: ACPI: PM: (supports S0 S3 S5) Mar 7 01:46:40.461976 kernel: ACPI: Using IOAPIC for interrupt routing Mar 7 01:46:40.461987 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 7 01:46:40.461998 kernel: PCI: Using E820 reservations for host bridge windows Mar 7 01:46:40.462009 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 7 01:46:40.462020 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 
00-ff]) Mar 7 01:46:40.462840 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 7 01:46:40.463103 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 7 01:46:40.463614 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 7 01:46:40.463636 kernel: PCI host bridge to bus 0000:00 Mar 7 01:46:40.464027 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 7 01:46:40.466451 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 7 01:46:40.466825 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 7 01:46:40.477046 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 7 01:46:40.477496 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 7 01:46:40.477692 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 7 01:46:40.477879 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 7 01:46:40.478471 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 7 01:46:40.478793 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 7 01:46:40.480826 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 7 01:46:40.481039 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 7 01:46:40.481325 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 7 01:46:40.481491 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 7 01:46:40.481678 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 7 01:46:40.482238 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 7 01:46:40.482517 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 7 01:46:40.482729 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 7 01:46:40.482926 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit 
pref] Mar 7 01:46:40.484358 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 7 01:46:40.484568 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 7 01:46:40.484765 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 7 01:46:40.484958 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 7 01:46:40.485450 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 7 01:46:40.485653 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 7 01:46:40.485853 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 7 01:46:40.486042 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 7 01:46:40.486364 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 7 01:46:40.486759 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 7 01:46:40.486958 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 7 01:46:40.487886 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 7 01:46:40.488081 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 7 01:46:40.488410 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 7 01:46:40.496697 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 7 01:46:40.496904 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 7 01:46:40.496923 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 7 01:46:40.496935 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 7 01:46:40.496948 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 7 01:46:40.496985 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 7 01:46:40.496998 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 7 01:46:40.497009 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 7 01:46:40.497019 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 7 01:46:40.497032 kernel: ACPI: 
PCI: Interrupt link LNKH configured for IRQ 11 Mar 7 01:46:40.497042 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 7 01:46:40.497053 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 7 01:46:40.497066 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 7 01:46:40.497081 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 7 01:46:40.497092 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 7 01:46:40.497105 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 7 01:46:40.497118 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 7 01:46:40.497128 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 7 01:46:40.497138 kernel: iommu: Default domain type: Translated Mar 7 01:46:40.497151 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 7 01:46:40.497160 kernel: efivars: Registered efivars operations Mar 7 01:46:40.497172 kernel: PCI: Using ACPI for IRQ routing Mar 7 01:46:40.497185 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 7 01:46:40.497326 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 7 01:46:40.497340 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 7 01:46:40.497353 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 7 01:46:40.497365 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 7 01:46:40.497573 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 7 01:46:40.497768 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 7 01:46:40.497960 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 7 01:46:40.497980 kernel: vgaarb: loaded Mar 7 01:46:40.497991 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 7 01:46:40.498010 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 7 01:46:40.498022 kernel: clocksource: Switched to clocksource kvm-clock Mar 7 01:46:40.498032 kernel: VFS: Disk quotas 
dquot_6.6.0 Mar 7 01:46:40.498044 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 01:46:40.498055 kernel: pnp: PnP ACPI init Mar 7 01:46:40.498589 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 7 01:46:40.498612 kernel: pnp: PnP ACPI: found 6 devices Mar 7 01:46:40.498626 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 7 01:46:40.498644 kernel: NET: Registered PF_INET protocol family Mar 7 01:46:40.498657 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 7 01:46:40.498670 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 7 01:46:40.498683 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 7 01:46:40.498696 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 7 01:46:40.498709 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 7 01:46:40.498722 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 7 01:46:40.498734 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:46:40.498746 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:46:40.498763 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 01:46:40.498775 kernel: NET: Registered PF_XDP protocol family Mar 7 01:46:40.498983 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 7 01:46:40.499178 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 7 01:46:40.513478 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 7 01:46:40.513674 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 7 01:46:40.513846 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 7 01:46:40.514017 kernel: pci_bus 0000:00: resource 7 [mem 
0x9d000000-0xafffffff window] Mar 7 01:46:40.514250 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 7 01:46:40.514491 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 7 01:46:40.514508 kernel: PCI: CLS 0 bytes, default 64 Mar 7 01:46:40.514521 kernel: Initialise system trusted keyrings Mar 7 01:46:40.514534 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 7 01:46:40.514544 kernel: Key type asymmetric registered Mar 7 01:46:40.514554 kernel: Asymmetric key parser 'x509' registered Mar 7 01:46:40.514564 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 7 01:46:40.514575 kernel: io scheduler mq-deadline registered Mar 7 01:46:40.514594 kernel: io scheduler kyber registered Mar 7 01:46:40.514605 kernel: io scheduler bfq registered Mar 7 01:46:40.514616 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 7 01:46:40.514628 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 7 01:46:40.514639 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 7 01:46:40.514650 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 7 01:46:40.514661 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 01:46:40.514672 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 7 01:46:40.514683 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 7 01:46:40.514697 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 7 01:46:40.514709 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 7 01:46:40.514994 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 7 01:46:40.515014 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 7 01:46:40.515172 kernel: rtc_cmos 00:04: registered as rtc0 Mar 7 01:46:40.523752 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:46:38 UTC (1772847998) Mar 7 01:46:40.523935 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, 
hpet irqs Mar 7 01:46:40.523953 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 7 01:46:40.523974 kernel: efifb: probing for efifb Mar 7 01:46:40.523988 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 7 01:46:40.523998 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 7 01:46:40.524008 kernel: efifb: scrolling: redraw Mar 7 01:46:40.524021 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 7 01:46:40.524033 kernel: Console: switching to colour frame buffer device 100x37 Mar 7 01:46:40.524043 kernel: fb0: EFI VGA frame buffer device Mar 7 01:46:40.524054 kernel: pstore: Using crash dump compression: deflate Mar 7 01:46:40.524067 kernel: pstore: Registered efi_pstore as persistent store backend Mar 7 01:46:40.524082 kernel: NET: Registered PF_INET6 protocol family Mar 7 01:46:40.524095 kernel: Segment Routing with IPv6 Mar 7 01:46:40.524107 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 01:46:40.524116 kernel: NET: Registered PF_PACKET protocol family Mar 7 01:46:40.524128 kernel: Key type dns_resolver registered Mar 7 01:46:40.524141 kernel: IPI shorthand broadcast: enabled Mar 7 01:46:40.524181 kernel: sched_clock: Marking stable (4097239576, 1455410812)->(7051248168, -1498597780) Mar 7 01:46:40.524243 kernel: registered taskstats version 1 Mar 7 01:46:40.524316 kernel: Loading compiled-in X.509 certificates Mar 7 01:46:40.524335 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90' Mar 7 01:46:40.524346 kernel: Key type .fscrypt registered Mar 7 01:46:40.524358 kernel: Key type fscrypt-provisioning registered Mar 7 01:46:40.524369 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 7 01:46:40.524381 kernel: ima: Allocated hash algorithm: sha1 Mar 7 01:46:40.524392 kernel: ima: No architecture policies found Mar 7 01:46:40.524403 kernel: clk: Disabling unused clocks Mar 7 01:46:40.524415 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 7 01:46:40.524431 kernel: Write protecting the kernel read-only data: 36864k Mar 7 01:46:40.524443 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 7 01:46:40.524454 kernel: Run /init as init process Mar 7 01:46:40.524465 kernel: with arguments: Mar 7 01:46:40.524477 kernel: /init Mar 7 01:46:40.524488 kernel: with environment: Mar 7 01:46:40.524500 kernel: HOME=/ Mar 7 01:46:40.524511 kernel: TERM=linux Mar 7 01:46:40.524525 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:46:40.524543 systemd[1]: Detected virtualization kvm. Mar 7 01:46:40.524555 systemd[1]: Detected architecture x86-64. Mar 7 01:46:40.524567 systemd[1]: Running in initrd. Mar 7 01:46:40.524578 systemd[1]: No hostname configured, using default hostname. Mar 7 01:46:40.524590 systemd[1]: Hostname set to . Mar 7 01:46:40.524602 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:46:40.524613 systemd[1]: Queued start job for default target initrd.target. Mar 7 01:46:40.524629 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:46:40.524641 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:46:40.524654 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 7 01:46:40.524666 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:46:40.524678 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:46:40.524697 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:46:40.524711 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:46:40.524723 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:46:40.524735 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:46:40.524747 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:46:40.524759 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:46:40.524770 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:46:40.524786 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:46:40.524797 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:46:40.524808 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:46:40.524819 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:46:40.524831 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:46:40.524844 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:46:40.524858 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:46:40.524869 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:46:40.524879 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:46:40.524896 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:46:40.524907 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:46:40.524919 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:46:40.524930 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:46:40.524941 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:46:40.524953 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:46:40.524964 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:46:40.524975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:46:40.524990 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:46:40.525032 systemd-journald[194]: Collecting audit messages is disabled.
Mar 7 01:46:40.525059 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:46:40.525073 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:46:40.525090 systemd-journald[194]: Journal started
Mar 7 01:46:40.525116 systemd-journald[194]: Runtime Journal (/run/log/journal/95c0a28957ff4d9fbbfd816fb880f527) is 6.0M, max 48.3M, 42.2M free.
Mar 7 01:46:40.555724 systemd-modules-load[195]: Inserted module 'overlay'
Mar 7 01:46:40.594035 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:46:40.622027 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:46:40.639710 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:46:40.650829 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:46:40.778453 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:46:40.781372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:46:40.817148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:46:40.849624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:46:40.892702 kernel: Bridge firewalling registered
Mar 7 01:46:40.896521 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 7 01:46:40.912113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:46:40.964874 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:46:40.992766 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:46:41.031131 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:46:41.063693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:46:41.095408 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:46:41.111891 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:46:41.141448 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:46:41.177985 dracut-cmdline[231]: dracut-dracut-053
Mar 7 01:46:41.190787 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:46:41.292154 systemd-resolved[235]: Positive Trust Anchors:
Mar 7 01:46:41.293459 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:46:41.294176 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:46:41.298566 systemd-resolved[235]: Defaulting to hostname 'linux'.
Mar 7 01:46:41.302240 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:46:41.414760 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:46:41.605754 kernel: SCSI subsystem initialized
Mar 7 01:46:41.629004 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:46:41.661751 kernel: iscsi: registered transport (tcp)
Mar 7 01:46:41.718369 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:46:41.718493 kernel: QLogic iSCSI HBA Driver
Mar 7 01:46:41.996067 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:46:42.042373 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:46:42.163453 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:46:42.163529 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:46:42.170838 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:46:42.364612 kernel: raid6: avx2x4 gen() 16302 MB/s
Mar 7 01:46:42.406434 kernel: raid6: avx2x2 gen() 11860 MB/s
Mar 7 01:46:42.414648 kernel: raid6: avx2x1 gen() 1924 MB/s
Mar 7 01:46:42.414702 kernel: raid6: using algorithm avx2x4 gen() 16302 MB/s
Mar 7 01:46:42.443731 kernel: raid6: .... xor() 2465 MB/s, rmw enabled
Mar 7 01:46:42.443813 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:46:42.503412 kernel: xor: automatically using best checksumming function avx
Mar 7 01:46:43.685762 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:46:43.794333 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:46:43.823934 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:46:43.857945 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Mar 7 01:46:43.866649 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:46:43.870587 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:46:43.990639 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Mar 7 01:46:44.115697 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:46:44.154814 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:46:44.345824 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:46:44.387102 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:46:44.459151 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:46:44.483682 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:46:44.513514 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:46:44.551530 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:46:44.586655 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:46:44.607810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:46:44.608011 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:46:44.613446 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:46:44.617975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:46:44.618348 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:46:44.627499 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:46:44.669374 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 7 01:46:44.684062 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:46:44.684130 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 7 01:46:44.671689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:46:44.731443 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:46:44.731477 kernel: GPT:9289727 != 19775487
Mar 7 01:46:44.731507 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:46:44.731522 kernel: GPT:9289727 != 19775487
Mar 7 01:46:44.731536 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:46:44.684994 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:46:44.756936 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:46:44.762765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:46:44.763073 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:46:44.874899 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:46:44.958430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:46:45.003380 kernel: libata version 3.00 loaded.
Mar 7 01:46:45.009576 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:46:45.078569 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:46:45.129388 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:46:45.163900 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:46:45.222607 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 7 01:46:45.241399 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 01:46:45.248365 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 01:46:45.248429 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (463)
Mar 7 01:46:45.262465 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 7 01:46:45.331148 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 01:46:45.332109 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 01:46:45.352418 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Mar 7 01:46:45.358394 kernel: scsi host0: ahci
Mar 7 01:46:45.370414 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 7 01:46:45.386577 kernel: scsi host1: ahci
Mar 7 01:46:45.386973 kernel: scsi host2: ahci
Mar 7 01:46:45.399702 kernel: scsi host3: ahci
Mar 7 01:46:45.408542 kernel: scsi host4: ahci
Mar 7 01:46:45.425113 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 7 01:46:45.514834 kernel: scsi host5: ahci
Mar 7 01:46:45.518175 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31
Mar 7 01:46:45.518251 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31
Mar 7 01:46:45.518325 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31
Mar 7 01:46:45.518372 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31
Mar 7 01:46:45.518387 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31
Mar 7 01:46:45.518400 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31
Mar 7 01:46:45.535339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 01:46:45.583885 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:46:45.636789 disk-uuid[569]: Primary Header is updated.
Mar 7 01:46:45.636789 disk-uuid[569]: Secondary Entries is updated.
Mar 7 01:46:45.636789 disk-uuid[569]: Secondary Header is updated.
Mar 7 01:46:45.738420 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:46:45.758343 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 01:46:45.773825 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 7 01:46:45.773914 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 01:46:45.787452 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:46:45.787529 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 01:46:45.831419 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 01:46:45.831497 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 7 01:46:45.831519 kernel: ata3.00: applying bridge limits
Mar 7 01:46:45.851407 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 01:46:45.863858 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:46:45.870333 kernel: ata3.00: configured for UDMA/100
Mar 7 01:46:45.921652 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 7 01:46:46.173105 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 7 01:46:46.173865 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 01:46:46.220398 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 7 01:46:46.866888 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:46:46.866955 disk-uuid[570]: The operation has completed successfully.
Mar 7 01:46:46.982549 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:46:46.982780 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:46:47.055524 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:46:47.085373 sh[604]: Success
Mar 7 01:46:47.172405 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 01:46:47.418115 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:46:47.481663 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:46:47.519803 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:46:47.591863 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:46:47.591901 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:46:47.591916 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:46:47.591932 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:46:47.591945 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:46:47.663630 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:46:47.684342 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:46:47.743631 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:46:47.764660 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:46:47.850608 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:46:47.850678 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:46:47.850698 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:46:47.882439 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:46:47.924442 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:46:47.952558 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:46:47.980547 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:46:48.018338 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:46:48.304598 ignition[707]: Ignition 2.19.0
Mar 7 01:46:48.304659 ignition[707]: Stage: fetch-offline
Mar 7 01:46:48.304719 ignition[707]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:46:48.304737 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:46:48.304883 ignition[707]: parsed url from cmdline: ""
Mar 7 01:46:48.304890 ignition[707]: no config URL provided
Mar 7 01:46:48.304899 ignition[707]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:46:48.304915 ignition[707]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:46:48.304955 ignition[707]: op(1): [started] loading QEMU firmware config module
Mar 7 01:46:48.304965 ignition[707]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 7 01:46:48.365134 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:46:48.342838 ignition[707]: op(1): [finished] loading QEMU firmware config module
Mar 7 01:46:48.424895 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:46:48.488573 systemd-networkd[792]: lo: Link UP
Mar 7 01:46:48.488623 systemd-networkd[792]: lo: Gained carrier
Mar 7 01:46:48.497162 systemd-networkd[792]: Enumeration completed
Mar 7 01:46:48.497612 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:46:48.505473 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:46:48.505479 systemd-networkd[792]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:46:48.510546 systemd-networkd[792]: eth0: Link UP
Mar 7 01:46:48.510554 systemd-networkd[792]: eth0: Gained carrier
Mar 7 01:46:48.510569 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:46:48.581147 systemd[1]: Reached target network.target - Network.
Mar 7 01:46:48.614144 ignition[707]: parsing config with SHA512: 183a4c66d85e915667557b39c28dcbb9893024afa77ea1b01c4fdb054207930d98b537e24a27496634439cbc56b7f51fb66fe69ec92dda80928cff933fb743bd
Mar 7 01:46:48.619501 systemd-networkd[792]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 01:46:48.632099 unknown[707]: fetched base config from "system"
Mar 7 01:46:48.632782 ignition[707]: fetch-offline: fetch-offline passed
Mar 7 01:46:48.632116 unknown[707]: fetched user config from "qemu"
Mar 7 01:46:48.632878 ignition[707]: Ignition finished successfully
Mar 7 01:46:48.641355 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:46:48.650947 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 7 01:46:48.683973 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:46:48.746183 ignition[796]: Ignition 2.19.0
Mar 7 01:46:48.746523 ignition[796]: Stage: kargs
Mar 7 01:46:48.746769 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:46:48.746788 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:46:48.751403 ignition[796]: kargs: kargs passed
Mar 7 01:46:48.772718 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:46:48.751477 ignition[796]: Ignition finished successfully
Mar 7 01:46:48.824583 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:46:48.894624 ignition[805]: Ignition 2.19.0
Mar 7 01:46:48.894893 ignition[805]: Stage: disks
Mar 7 01:46:48.896446 ignition[805]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:46:48.896468 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:46:48.916750 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:46:48.897906 ignition[805]: disks: disks passed
Mar 7 01:46:48.897981 ignition[805]: Ignition finished successfully
Mar 7 01:46:48.940725 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:46:48.940823 systemd-resolved[235]: Detected conflict on linux IN A 10.0.0.112
Mar 7 01:46:48.940835 systemd-resolved[235]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Mar 7 01:46:48.958646 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:46:48.977045 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:46:49.000887 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:46:49.012181 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:46:49.056561 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:46:49.132373 systemd-fsck[816]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:46:49.150047 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:46:49.191148 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:46:49.609427 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:46:49.609373 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:46:49.622687 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:46:49.650538 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:46:49.668007 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:46:49.706647 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (824)
Mar 7 01:46:49.706729 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:46:49.706753 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:46:49.706771 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:46:49.680117 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:46:49.680184 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:46:49.680334 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:46:49.726317 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:46:49.744079 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:46:49.746961 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:46:49.784184 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:46:49.894762 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:46:49.909544 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:46:49.921939 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:46:49.933836 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:46:50.174822 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:46:50.196547 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:46:50.203989 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:46:50.228399 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:46:50.215075 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:46:50.292882 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:46:50.318391 ignition[936]: INFO : Ignition 2.19.0
Mar 7 01:46:50.318391 ignition[936]: INFO : Stage: mount
Mar 7 01:46:50.318391 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:46:50.318391 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:46:50.335859 ignition[936]: INFO : mount: mount passed
Mar 7 01:46:50.335859 ignition[936]: INFO : Ignition finished successfully
Mar 7 01:46:50.344655 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:46:50.375060 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:46:50.392834 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:46:50.431073 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (951)
Mar 7 01:46:50.445696 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:46:50.445785 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:46:50.445824 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:46:50.476083 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:46:50.481694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:46:50.493052 systemd-networkd[792]: eth0: Gained IPv6LL
Mar 7 01:46:50.593470 ignition[968]: INFO : Ignition 2.19.0
Mar 7 01:46:50.593470 ignition[968]: INFO : Stage: files
Mar 7 01:46:50.608471 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:46:50.608471 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:46:50.608471 ignition[968]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:46:50.608471 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:46:50.608471 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:46:50.660590 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:46:50.660590 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:46:50.677011 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:46:50.671792 unknown[968]: wrote ssh authorized keys file for user: core
Mar 7 01:46:50.693142 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:46:50.693142 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:46:50.803399 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:46:50.875163 kernel: hrtimer: interrupt took 2813854 ns
Mar 7 01:46:50.999449 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:46:51.017457 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 7 01:46:51.393559 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:46:52.379008 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:46:52.379008 ignition[968]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:46:52.410438 ignition[968]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:46:52.410438 ignition[968]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:46:52.410438 ignition[968]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:46:52.410438 ignition[968]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 01:46:52.410438 ignition[968]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 01:46:52.410438 ignition[968]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 01:46:52.410438 ignition[968]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 01:46:52.410438 ignition[968]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 7 01:46:52.598445 ignition[968]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 01:46:52.642983 ignition[968]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 01:46:52.642983 ignition[968]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 7 01:46:52.642983 ignition[968]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:46:52.642983 ignition[968]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:46:52.642983 ignition[968]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:46:52.642983 ignition[968]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:46:52.642983 ignition[968]: INFO : files: files passed
Mar 7 01:46:52.642983 ignition[968]: INFO : Ignition finished successfully
Mar 7 01:46:52.657763 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:46:52.746847 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:46:52.800543 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:46:52.801529 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:46:52.801741 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:46:52.862377 initrd-setup-root-after-ignition[997]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 7 01:46:52.883900 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:46:52.883900 initrd-setup-root-after-ignition[999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:46:52.911592 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:46:52.928941 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:46:52.952879 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:46:52.989840 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:46:53.067815 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:46:53.068072 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:46:53.101640 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:46:53.106916 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:46:53.112057 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:46:53.142679 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:46:53.196610 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:46:53.234904 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:46:53.284790 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:46:53.285180 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:46:53.311666 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:46:53.337419 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:46:53.337681 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:46:53.377627 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:46:53.387480 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:46:53.391444 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:46:53.415386 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:46:53.444391 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:46:53.444590 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Mar 7 01:46:53.444732 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:46:53.444878 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:46:53.445012 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:46:53.445132 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:46:53.445325 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:46:53.445518 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:46:53.445832 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:46:53.445967 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:46:53.446052 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:46:53.446608 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:46:53.533811 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:46:53.534336 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:46:53.562101 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:46:53.564011 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:46:53.597352 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:46:53.610201 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:46:53.622213 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:46:53.665003 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:46:53.711114 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:46:53.757427 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:46:53.757657 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Mar 7 01:46:53.765777 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:46:53.765975 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:46:53.777381 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:46:53.777513 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:46:53.787069 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:46:53.787348 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:46:53.819543 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:46:53.831434 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:46:53.864450 ignition[1023]: INFO : Ignition 2.19.0 Mar 7 01:46:53.864450 ignition[1023]: INFO : Stage: umount Mar 7 01:46:53.864450 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:46:53.864450 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:46:53.864450 ignition[1023]: INFO : umount: umount passed Mar 7 01:46:53.864450 ignition[1023]: INFO : Ignition finished successfully Mar 7 01:46:53.840114 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:46:53.841049 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:46:53.856856 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:46:53.857010 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:46:53.872587 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:46:53.872767 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 01:46:53.901560 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:46:53.902607 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:46:53.902948 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Mar 7 01:46:53.909727 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:46:53.909915 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:46:53.924077 systemd[1]: Stopped target network.target - Network. Mar 7 01:46:53.935514 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:46:53.935650 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:46:53.944009 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:46:53.944113 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:46:53.949760 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:46:53.949856 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:46:53.963087 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:46:53.963195 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:46:53.980720 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:46:53.980835 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:46:53.993888 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:46:54.008337 systemd-networkd[792]: eth0: DHCPv6 lease lost Mar 7 01:46:54.040727 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:46:54.111043 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:46:54.118055 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:46:54.135584 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:46:54.135788 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:46:54.162147 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:46:54.162383 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:46:54.197791 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Mar 7 01:46:54.204319 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:46:54.210176 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:46:54.217955 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:46:54.218065 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:46:54.223422 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:46:54.223505 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:46:54.264988 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:46:54.265831 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:46:54.288769 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:46:54.310837 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:46:54.317923 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:46:54.332958 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:46:54.333574 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:46:54.368601 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:46:54.369181 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:46:54.392159 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:46:54.392649 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:46:54.420699 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:46:54.421166 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:46:54.452588 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:46:54.452731 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Mar 7 01:46:54.460168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:46:54.484385 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:46:54.542655 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:46:54.550728 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:46:54.550857 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:46:54.572768 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:46:54.572882 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:46:54.595117 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:46:54.595221 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:46:54.608790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:46:54.608922 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:46:54.635982 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:46:54.636196 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:46:54.712809 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:46:54.740877 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:46:54.764121 systemd[1]: Switching root. Mar 7 01:46:54.808174 systemd-journald[194]: Journal stopped Mar 7 01:46:57.878392 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Mar 7 01:46:57.878482 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 01:46:57.878517 kernel: SELinux: policy capability open_perms=1 Mar 7 01:46:57.878538 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 01:46:57.878555 kernel: SELinux: policy capability always_check_network=0 Mar 7 01:46:57.878573 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 01:46:57.878600 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 01:46:57.878619 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 01:46:57.878652 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 01:46:57.878671 kernel: audit: type=1403 audit(1772848015.322:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 01:46:57.878700 systemd[1]: Successfully loaded SELinux policy in 153.904ms. Mar 7 01:46:57.878732 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 39.354ms. Mar 7 01:46:57.878753 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:46:57.878776 systemd[1]: Detected virtualization kvm. Mar 7 01:46:57.878796 systemd[1]: Detected architecture x86-64. Mar 7 01:46:57.878817 systemd[1]: Detected first boot. Mar 7 01:46:57.878836 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:46:57.878854 zram_generator::config[1067]: No configuration found. Mar 7 01:46:57.878872 systemd[1]: Populated /etc with preset unit settings. Mar 7 01:46:57.878892 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 7 01:46:57.878908 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 7 01:46:57.878924 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Mar 7 01:46:57.878940 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 01:46:57.878955 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 01:46:57.878970 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 01:46:57.878986 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 01:46:57.879006 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 01:46:57.879036 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 01:46:57.879055 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 01:46:57.879074 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 01:46:57.879094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:46:57.879114 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:46:57.879133 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 01:46:57.879151 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 01:46:57.879170 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 7 01:46:57.879186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:46:57.879206 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 7 01:46:57.879223 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:46:57.880494 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 7 01:46:57.880522 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Mar 7 01:46:57.880542 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 7 01:46:57.880563 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 01:46:57.880590 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:46:57.880609 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:46:57.880631 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:46:57.880647 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:46:57.880663 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 01:46:57.880679 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 01:46:57.880695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:46:57.880712 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:46:57.880729 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:46:57.881137 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 01:46:57.881158 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 01:46:57.881182 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 01:46:57.881200 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 01:46:57.881219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:46:57.881332 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 01:46:57.881354 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 01:46:57.881373 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 7 01:46:57.881432 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 01:46:57.881453 systemd[1]: Reached target machines.target - Containers. Mar 7 01:46:57.881475 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 01:46:57.881492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:46:57.881509 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:46:57.881526 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 01:46:57.881543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:46:57.881559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:46:57.881576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:46:57.881592 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 01:46:57.881609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:46:57.881630 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 01:46:57.881647 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 7 01:46:57.881664 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 7 01:46:57.881680 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 7 01:46:57.881697 systemd[1]: Stopped systemd-fsck-usr.service. Mar 7 01:46:57.881713 kernel: fuse: init (API version 7.39) Mar 7 01:46:57.881732 kernel: loop: module loaded Mar 7 01:46:57.881749 systemd[1]: Starting systemd-journald.service - Journal Service... 
Mar 7 01:46:57.881767 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:46:57.881792 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 01:46:57.881811 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 01:46:57.881864 systemd-journald[1151]: Collecting audit messages is disabled. Mar 7 01:46:57.881897 systemd-journald[1151]: Journal started Mar 7 01:46:57.881930 systemd-journald[1151]: Runtime Journal (/run/log/journal/95c0a28957ff4d9fbbfd816fb880f527) is 6.0M, max 48.3M, 42.2M free. Mar 7 01:46:56.676750 systemd[1]: Queued start job for default target multi-user.target. Mar 7 01:46:56.718451 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 7 01:46:56.721461 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 7 01:46:56.721999 systemd[1]: systemd-journald.service: Consumed 1.724s CPU time. Mar 7 01:46:57.910351 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:46:57.929657 systemd[1]: verity-setup.service: Deactivated successfully. Mar 7 01:46:57.929854 kernel: ACPI: bus type drm_connector registered Mar 7 01:46:57.929874 systemd[1]: Stopped verity-setup.service. Mar 7 01:46:57.957469 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:46:57.968418 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:46:57.971903 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 01:46:57.978981 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 01:46:57.985517 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 01:46:57.992090 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Mar 7 01:46:57.998456 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 01:46:58.004000 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 01:46:58.009573 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 01:46:58.015868 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:46:58.022955 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 01:46:58.023567 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 01:46:58.031485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:46:58.031792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:46:58.040037 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:46:58.040816 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:46:58.052538 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:46:58.052867 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:46:58.066733 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 01:46:58.069511 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 01:46:58.086982 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:46:58.087371 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:46:58.100729 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:46:58.112226 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 01:46:58.121739 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 01:46:58.133838 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:46:58.166941 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Mar 7 01:46:58.191841 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 01:46:58.212936 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 01:46:58.230548 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 01:46:58.230667 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:46:58.255491 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 01:46:58.308142 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 01:46:58.328974 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 01:46:58.338767 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:46:58.346987 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 7 01:46:58.365108 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 01:46:58.389842 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:46:58.413349 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 01:46:58.424523 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:46:58.438542 systemd-journald[1151]: Time spent on flushing to /var/log/journal/95c0a28957ff4d9fbbfd816fb880f527 is 24.565ms for 988 entries. Mar 7 01:46:58.438542 systemd-journald[1151]: System Journal (/var/log/journal/95c0a28957ff4d9fbbfd816fb880f527) is 8.0M, max 195.6M, 187.6M free. Mar 7 01:46:58.521892 systemd-journald[1151]: Received client request to flush runtime journal. 
Mar 7 01:46:58.521965 kernel: loop0: detected capacity change from 0 to 140768 Mar 7 01:46:58.447092 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:46:58.468884 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 01:46:58.483498 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:46:58.510651 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 01:46:58.520804 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 01:46:58.528917 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 01:46:58.540511 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 01:46:58.548171 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 01:46:58.555730 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 01:46:58.562926 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:46:58.585671 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 01:46:58.606612 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 01:46:58.612954 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Mar 7 01:46:58.613012 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Mar 7 01:46:58.616586 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 01:46:58.622739 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 7 01:46:58.626532 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Mar 7 01:46:58.644048 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 01:46:58.664457 kernel: loop1: detected capacity change from 0 to 217752 Mar 7 01:46:58.695077 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 01:46:58.697170 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 01:46:58.739990 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 01:46:58.758849 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:46:58.792393 kernel: loop2: detected capacity change from 0 to 142488 Mar 7 01:46:58.828076 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Mar 7 01:46:58.828140 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Mar 7 01:46:58.837853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:46:58.903847 kernel: loop3: detected capacity change from 0 to 140768 Mar 7 01:46:58.947428 kernel: loop4: detected capacity change from 0 to 217752 Mar 7 01:46:59.019745 kernel: loop5: detected capacity change from 0 to 142488 Mar 7 01:46:59.074481 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 7 01:46:59.075563 (sd-merge)[1210]: Merged extensions into '/usr'. Mar 7 01:46:59.080882 systemd[1]: Reloading requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 01:46:59.080904 systemd[1]: Reloading... Mar 7 01:46:59.167380 zram_generator::config[1235]: No configuration found. Mar 7 01:46:59.371982 ldconfig[1178]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 01:46:59.404789 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 7 01:46:59.484486 systemd[1]: Reloading finished in 402 ms. Mar 7 01:46:59.543625 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 01:46:59.550426 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 01:46:59.559159 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 01:46:59.592672 systemd[1]: Starting ensure-sysext.service... Mar 7 01:46:59.607470 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:46:59.621483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:46:59.630362 systemd[1]: Reloading requested from client PID 1274 ('systemctl') (unit ensure-sysext.service)... Mar 7 01:46:59.630380 systemd[1]: Reloading... Mar 7 01:46:59.685927 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 01:46:59.686587 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 01:46:59.688687 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 01:46:59.689062 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Mar 7 01:46:59.689170 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Mar 7 01:46:59.697633 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:46:59.697840 systemd-tmpfiles[1275]: Skipping /boot Mar 7 01:46:59.730116 systemd-udevd[1276]: Using default interface naming scheme 'v255'. Mar 7 01:46:59.754802 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:46:59.754821 systemd-tmpfiles[1275]: Skipping /boot Mar 7 01:46:59.765364 zram_generator::config[1300]: No configuration found. 
Mar 7 01:46:59.906469 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1313)
Mar 7 01:47:00.075912 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 01:47:00.087209 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:47:00.109538 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:47:00.158713 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 7 01:47:00.176722 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 01:47:00.215140 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 01:47:00.216161 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 01:47:00.489813 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 01:47:01.114614 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:47:01.115212 systemd[1]: Reloading finished in 1484 ms.
Mar 7 01:47:01.128481 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 7 01:47:01.147735 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:47:01.222773 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:47:01.360858 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:47:01.428762 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:47:01.567422 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:47:01.593126 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:47:01.607092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:47:01.618752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:47:01.633473 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:47:01.651908 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:47:01.670818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:47:01.679639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:47:01.770517 augenrules[1392]: No rules
Mar 7 01:47:01.770541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:47:01.805738 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:47:01.838226 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:47:01.883906 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:47:01.901574 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 01:47:01.909811 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:47:01.917460 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:47:01.923922 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:47:01.925757 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:47:01.932654 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:47:01.939568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:47:01.939972 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:47:01.946723 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:47:01.947047 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:47:01.957057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:47:01.957491 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:47:01.965587 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:47:01.966016 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:47:01.973837 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:47:01.983494 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:47:02.013471 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:47:02.014740 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:47:02.104729 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:47:02.120663 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:47:02.120819 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:47:02.126817 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:47:02.146717 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:47:02.188647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:47:02.194479 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:47:02.273186 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:47:02.273751 kernel: kvm_amd: TSC scaling supported
Mar 7 01:47:02.273801 kernel: kvm_amd: Nested Virtualization enabled
Mar 7 01:47:02.273858 kernel: kvm_amd: Nested Paging enabled
Mar 7 01:47:02.281030 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 7 01:47:02.281104 kernel: kvm_amd: PMU virtualization is disabled
Mar 7 01:47:02.508157 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 01:47:02.514561 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:47:02.530325 systemd-resolved[1400]: Positive Trust Anchors:
Mar 7 01:47:02.530347 systemd-resolved[1400]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:47:02.530396 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:47:02.544727 systemd-resolved[1400]: Defaulting to hostname 'linux'.
Mar 7 01:47:02.545519 systemd-networkd[1398]: lo: Link UP
Mar 7 01:47:02.545563 systemd-networkd[1398]: lo: Gained carrier
Mar 7 01:47:02.548358 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:47:02.549565 systemd-networkd[1398]: Enumeration completed
Mar 7 01:47:02.551010 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:47:02.551017 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:47:02.553105 systemd-networkd[1398]: eth0: Link UP
Mar 7 01:47:02.553114 systemd-networkd[1398]: eth0: Gained carrier
Mar 7 01:47:02.553131 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:47:02.567154 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:47:02.583110 systemd[1]: Reached target network.target - Network.
Mar 7 01:47:02.601657 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:47:02.746903 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:47:02.758003 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 01:47:02.760766 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection.
Mar 7 01:47:03.464164 systemd-resolved[1400]: Clock change detected. Flushing caches.
Mar 7 01:47:03.464232 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 7 01:47:03.464294 systemd-timesyncd[1401]: Initial clock synchronization to Sat 2026-03-07 01:47:03.464084 UTC.
Mar 7 01:47:03.515675 kernel: EDAC MC: Ver: 3.0.0
Mar 7 01:47:03.564306 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:47:03.596811 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:47:03.627652 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:47:03.663762 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:47:03.679674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:47:03.687349 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:47:03.696187 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:47:03.704868 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:47:03.713165 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:47:03.721132 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:47:03.730310 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:47:03.738022 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:47:03.738111 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:47:03.743180 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:47:03.761913 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:47:03.776927 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:47:03.799190 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:47:03.810929 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:47:03.822179 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:47:03.829936 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:47:03.834838 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:47:03.840918 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:47:03.841009 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:47:03.843941 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:47:03.847100 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:47:03.868211 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:47:03.883165 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:47:03.901066 jq[1439]: false
Mar 7 01:47:03.903110 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:47:03.911778 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:47:03.915999 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:47:03.930305 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:47:03.941030 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:47:03.957973 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:47:03.966955 extend-filesystems[1440]: Found loop3
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found loop4
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found loop5
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found sr0
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found vda
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found vda1
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found vda2
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found vda3
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found usr
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found vda4
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found vda6
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found vda7
Mar 7 01:47:03.971641 extend-filesystems[1440]: Found vda9
Mar 7 01:47:03.971641 extend-filesystems[1440]: Checking size of /dev/vda9
Mar 7 01:47:04.082738 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1317)
Mar 7 01:47:04.083002 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 7 01:47:04.083032 extend-filesystems[1440]: Resized partition /dev/vda9
Mar 7 01:47:03.985206 dbus-daemon[1438]: [system] SELinux support is enabled
Mar 7 01:47:03.994018 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:47:04.084304 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:47:04.025698 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:47:04.027175 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:47:04.094291 jq[1460]: true
Mar 7 01:47:04.029858 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:47:04.055871 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:47:04.110689 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 7 01:47:04.123847 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:47:04.191345 update_engine[1459]: I20260307 01:47:04.175130 1459 main.cc:92] Flatcar Update Engine starting
Mar 7 01:47:04.191345 update_engine[1459]: I20260307 01:47:04.179075 1459 update_check_scheduler.cc:74] Next update check in 3m6s
Mar 7 01:47:04.133717 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:47:04.203069 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 7 01:47:04.203069 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 7 01:47:04.203069 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 7 01:47:04.169186 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:47:04.238299 extend-filesystems[1440]: Resized filesystem in /dev/vda9
Mar 7 01:47:04.170290 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:47:04.172477 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:47:04.173774 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:47:04.192148 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 01:47:04.192184 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:47:04.197717 systemd-logind[1455]: New seat seat0.
Mar 7 01:47:04.199641 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:47:04.200086 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:47:04.208041 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:47:04.242338 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:47:04.242789 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:47:04.281290 jq[1467]: true
Mar 7 01:47:04.286267 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:47:04.323128 dbus-daemon[1438]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:47:04.325353 tar[1464]: linux-amd64/LICENSE
Mar 7 01:47:04.325353 tar[1464]: linux-amd64/helm
Mar 7 01:47:04.362284 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:47:04.379033 systemd-networkd[1398]: eth0: Gained IPv6LL
Mar 7 01:47:04.383114 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:47:04.385705 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 01:47:04.400211 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:47:04.400480 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:47:04.410751 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:47:04.412223 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:47:04.440359 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:47:04.463136 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:47:04.469160 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:47:04.477182 bash[1494]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:47:04.482323 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:47:04.499479 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:47:04.529136 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 7 01:47:04.553796 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:47:04.569411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:47:04.582811 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:47:04.601661 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:47:04.618162 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:50438.service - OpenSSH per-connection server daemon (10.0.0.1:50438).
Mar 7 01:47:04.627423 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 7 01:47:04.633037 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:47:04.633805 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:47:04.703444 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:47:04.718897 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 7 01:47:04.720868 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 7 01:47:04.740060 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:47:04.747227 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:47:04.777782 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 01:47:04.783335 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:47:04.801659 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 01:47:04.856001 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 01:47:04.877944 containerd[1468]: time="2026-03-07T01:47:04.877427696Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:47:04.925174 containerd[1468]: time="2026-03-07T01:47:04.925019980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:04.949713 containerd[1468]: time="2026-03-07T01:47:04.949634655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:47:04.950697 containerd[1468]: time="2026-03-07T01:47:04.950664446Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:47:04.950793 containerd[1468]: time="2026-03-07T01:47:04.950777207Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:47:04.951163 containerd[1468]: time="2026-03-07T01:47:04.951139524Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:47:04.951244 containerd[1468]: time="2026-03-07T01:47:04.951229742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:04.956307 containerd[1468]: time="2026-03-07T01:47:04.956277800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:47:04.956406 containerd[1468]: time="2026-03-07T01:47:04.956387013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:04.956963 containerd[1468]: time="2026-03-07T01:47:04.956829960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:47:04.957051 containerd[1468]: time="2026-03-07T01:47:04.957032338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:04.957116 containerd[1468]: time="2026-03-07T01:47:04.957099654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:47:04.957189 containerd[1468]: time="2026-03-07T01:47:04.957173842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:04.957366 containerd[1468]: time="2026-03-07T01:47:04.957347316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:04.958003 containerd[1468]: time="2026-03-07T01:47:04.957977873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:47:04.958266 containerd[1468]: time="2026-03-07T01:47:04.958244802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:47:04.958337 containerd[1468]: time="2026-03-07T01:47:04.958320503Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:47:04.958504 containerd[1468]: time="2026-03-07T01:47:04.958486913Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:47:04.958744 containerd[1468]: time="2026-03-07T01:47:04.958725359Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:47:04.984437 containerd[1468]: time="2026-03-07T01:47:04.984222105Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:47:04.985407 containerd[1468]: time="2026-03-07T01:47:04.985186476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:47:04.985799 containerd[1468]: time="2026-03-07T01:47:04.985686178Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:47:04.986160 containerd[1468]: time="2026-03-07T01:47:04.986131209Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:47:04.986971 containerd[1468]: time="2026-03-07T01:47:04.986941902Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:47:04.987373 containerd[1468]: time="2026-03-07T01:47:04.987348251Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:47:04.988929 containerd[1468]: time="2026-03-07T01:47:04.988806965Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:47:04.989202 containerd[1468]: time="2026-03-07T01:47:04.989175413Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:47:04.989461 containerd[1468]: time="2026-03-07T01:47:04.989437943Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:47:04.989668 containerd[1468]: time="2026-03-07T01:47:04.989645841Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:47:04.989761 containerd[1468]: time="2026-03-07T01:47:04.989738043Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:47:04.989958 containerd[1468]: time="2026-03-07T01:47:04.989832600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:47:04.990033 containerd[1468]: time="2026-03-07T01:47:04.990016543Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:47:04.990095 containerd[1468]: time="2026-03-07T01:47:04.990080833Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:47:04.990158 containerd[1468]: time="2026-03-07T01:47:04.990142678Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:47:04.990257 containerd[1468]: time="2026-03-07T01:47:04.990226995Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:47:04.991056 containerd[1468]: time="2026-03-07T01:47:04.990810896Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:47:04.991056 containerd[1468]: time="2026-03-07T01:47:04.990963020Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:47:04.991133 containerd[1468]: time="2026-03-07T01:47:04.991089576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991133 containerd[1468]: time="2026-03-07T01:47:04.991112579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991207 containerd[1468]: time="2026-03-07T01:47:04.991132075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991207 containerd[1468]: time="2026-03-07T01:47:04.991153626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991207 containerd[1468]: time="2026-03-07T01:47:04.991171339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991207 containerd[1468]: time="2026-03-07T01:47:04.991189052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991325 containerd[1468]: time="2026-03-07T01:47:04.991206074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991325 containerd[1468]: time="2026-03-07T01:47:04.991225199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991471 containerd[1468]: time="2026-03-07T01:47:04.991250677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991471 containerd[1468]: time="2026-03-07T01:47:04.991455920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991639 containerd[1468]: time="2026-03-07T01:47:04.991481127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991639 containerd[1468]: time="2026-03-07T01:47:04.991507977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991639 containerd[1468]: time="2026-03-07T01:47:04.991618444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991769 containerd[1468]: time="2026-03-07T01:47:04.991652227Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:47:04.991769 containerd[1468]: time="2026-03-07T01:47:04.991686571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991769 containerd[1468]: time="2026-03-07T01:47:04.991704754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.991769 containerd[1468]: time="2026-03-07T01:47:04.991720725Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:47:04.999008 containerd[1468]: time="2026-03-07T01:47:04.992751180Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 01:47:04.999008 containerd[1468]: time="2026-03-07T01:47:04.992788549Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:47:04.999008 containerd[1468]: time="2026-03-07T01:47:04.992805291Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 01:47:04.999008 containerd[1468]: time="2026-03-07T01:47:04.992822062Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 01:47:04.999008 containerd[1468]: time="2026-03-07T01:47:04.992837841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.999008 containerd[1468]: time="2026-03-07T01:47:04.992858130Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 7 01:47:04.999008 containerd[1468]: time="2026-03-07T01:47:04.992875271Z" level=info msg="NRI interface is disabled by configuration."
Mar 7 01:47:04.999008 containerd[1468]: time="2026-03-07T01:47:04.992894026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.993406403Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.993498044Z" level=info msg="Connect containerd service"
Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.993649196Z" level=info msg="using legacy CRI server"
Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.993665466Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.993783056Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.997081163Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to
load cni config" Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.997695729Z" level=info msg="Start subscribing containerd event" Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.997775338Z" level=info msg="Start recovering state" Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.998010206Z" level=info msg="Start event monitor" Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.998048598Z" level=info msg="Start snapshots syncer" Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.998066502Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:47:04.999353 containerd[1468]: time="2026-03-07T01:47:04.998078674Z" level=info msg="Start streaming server" Mar 7 01:47:05.003309 containerd[1468]: time="2026-03-07T01:47:05.002657314Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:47:05.003510 containerd[1468]: time="2026-03-07T01:47:05.003489126Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:47:05.003917 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:47:05.004194 containerd[1468]: time="2026-03-07T01:47:05.004174125Z" level=info msg="containerd successfully booted in 0.147962s" Mar 7 01:47:05.064128 sshd[1521]: Accepted publickey for core from 10.0.0.1 port 50438 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 01:47:05.075040 sshd[1521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:47:05.114190 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:47:05.114619 systemd-logind[1455]: New session 1 of user core. Mar 7 01:47:05.140607 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:47:05.203655 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Mar 7 01:47:05.266439 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 7 01:47:05.291303 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 7 01:47:05.567089 systemd[1547]: Queued start job for default target default.target.
Mar 7 01:47:05.579063 systemd[1547]: Created slice app.slice - User Application Slice.
Mar 7 01:47:05.579098 systemd[1547]: Reached target paths.target - Paths.
Mar 7 01:47:05.579116 systemd[1547]: Reached target timers.target - Timers.
Mar 7 01:47:05.597234 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 7 01:47:05.613973 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 7 01:47:05.615392 systemd[1547]: Reached target sockets.target - Sockets.
Mar 7 01:47:05.615425 systemd[1547]: Reached target basic.target - Basic System.
Mar 7 01:47:05.615503 systemd[1547]: Reached target default.target - Main User Target.
Mar 7 01:47:05.615666 systemd[1547]: Startup finished in 289ms.
Mar 7 01:47:05.615791 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 7 01:47:05.642511 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 7 01:47:05.665308 tar[1464]: linux-amd64/README.md
Mar 7 01:47:05.724313 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:47:05.797066 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:50460.service - OpenSSH per-connection server daemon (10.0.0.1:50460).
Mar 7 01:47:05.903192 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 50460 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:47:05.911047 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:47:05.935226 systemd-logind[1455]: New session 2 of user core.
Mar 7 01:47:05.957631 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 7 01:47:06.066012 sshd[1561]: pam_unix(sshd:session): session closed for user core
Mar 7 01:47:06.111987 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:50460.service: Deactivated successfully.
Mar 7 01:47:06.126346 systemd[1]: session-2.scope: Deactivated successfully.
Mar 7 01:47:06.129448 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit.
Mar 7 01:47:06.179432 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:50474.service - OpenSSH per-connection server daemon (10.0.0.1:50474).
Mar 7 01:47:06.250897 systemd-logind[1455]: Removed session 2.
Mar 7 01:47:06.364343 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 50474 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:47:06.373714 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:47:06.429196 systemd-logind[1455]: New session 3 of user core.
Mar 7 01:47:06.461630 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 7 01:47:06.603025 sshd[1568]: pam_unix(sshd:session): session closed for user core
Mar 7 01:47:06.625352 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:50474.service: Deactivated successfully.
Mar 7 01:47:06.634804 systemd[1]: session-3.scope: Deactivated successfully.
Mar 7 01:47:06.648358 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit.
Mar 7 01:47:06.663170 systemd-logind[1455]: Removed session 3.
Mar 7 01:47:07.106875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:47:07.108177 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:47:07.121976 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 01:47:07.136124 systemd[1]: Startup finished in 4.351s (kernel) + 16.274s (initrd) + 11.264s (userspace) = 31.890s.
Mar 7 01:47:08.384993 kubelet[1579]: E0307 01:47:08.384659 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:47:08.391708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:47:08.392120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:47:08.393289 systemd[1]: kubelet.service: Consumed 1.528s CPU time.
Mar 7 01:47:16.676864 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:34066.service - OpenSSH per-connection server daemon (10.0.0.1:34066).
Mar 7 01:47:16.773667 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 34066 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:47:16.776342 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:47:16.841969 systemd-logind[1455]: New session 4 of user core.
Mar 7 01:47:16.855987 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 7 01:47:16.965450 sshd[1593]: pam_unix(sshd:session): session closed for user core
Mar 7 01:47:16.985947 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:34066.service: Deactivated successfully.
Mar 7 01:47:16.989094 systemd[1]: session-4.scope: Deactivated successfully.
Mar 7 01:47:17.016937 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit.
Mar 7 01:47:17.031322 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:34074.service - OpenSSH per-connection server daemon (10.0.0.1:34074).
Mar 7 01:47:17.039732 systemd-logind[1455]: Removed session 4.
Mar 7 01:47:17.093505 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 34074 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:47:17.092428 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:47:17.124024 systemd-logind[1455]: New session 5 of user core.
Mar 7 01:47:17.145914 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 7 01:47:17.257769 sshd[1600]: pam_unix(sshd:session): session closed for user core
Mar 7 01:47:17.320936 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:34074.service: Deactivated successfully.
Mar 7 01:47:17.333038 systemd[1]: session-5.scope: Deactivated successfully.
Mar 7 01:47:17.356728 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit.
Mar 7 01:47:17.381275 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:34076.service - OpenSSH per-connection server daemon (10.0.0.1:34076).
Mar 7 01:47:17.403262 systemd-logind[1455]: Removed session 5.
Mar 7 01:47:17.589022 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 34076 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:47:17.604785 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:47:17.638095 systemd-logind[1455]: New session 6 of user core.
Mar 7 01:47:17.647146 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 7 01:47:17.738727 sshd[1607]: pam_unix(sshd:session): session closed for user core
Mar 7 01:47:17.770430 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:34076.service: Deactivated successfully.
Mar 7 01:47:17.782082 systemd[1]: session-6.scope: Deactivated successfully.
Mar 7 01:47:17.787933 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit.
Mar 7 01:47:17.820309 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:34090.service - OpenSSH per-connection server daemon (10.0.0.1:34090).
Mar 7 01:47:17.824970 systemd-logind[1455]: Removed session 6.
Mar 7 01:47:17.905155 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 34090 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:47:17.918055 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:47:17.965136 systemd-logind[1455]: New session 7 of user core.
Mar 7 01:47:17.979896 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 01:47:18.127438 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 01:47:18.134239 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:47:18.643099 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:47:18.701486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:47:20.123815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:47:20.127195 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:47:20.538288 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 01:47:20.546145 (dockerd)[1652]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 01:47:20.688511 kubelet[1644]: E0307 01:47:20.687692 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:47:20.723365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:47:20.723801 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:47:22.659068 dockerd[1652]: time="2026-03-07T01:47:22.658401109Z" level=info msg="Starting up"
Mar 7 01:47:23.432447 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3769109827-merged.mount: Deactivated successfully.
Mar 7 01:47:23.713433 dockerd[1652]: time="2026-03-07T01:47:23.710148186Z" level=info msg="Loading containers: start."
Mar 7 01:47:24.767515 kernel: Initializing XFRM netlink socket
Mar 7 01:47:25.365983 systemd-networkd[1398]: docker0: Link UP
Mar 7 01:47:25.485074 dockerd[1652]: time="2026-03-07T01:47:25.484503735Z" level=info msg="Loading containers: done."
Mar 7 01:47:25.612732 dockerd[1652]: time="2026-03-07T01:47:25.610823428Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 01:47:25.612732 dockerd[1652]: time="2026-03-07T01:47:25.611014926Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 7 01:47:25.612732 dockerd[1652]: time="2026-03-07T01:47:25.611181697Z" level=info msg="Daemon has completed initialization"
Mar 7 01:47:25.950699 dockerd[1652]: time="2026-03-07T01:47:25.947103079Z" level=info msg="API listen on /run/docker.sock"
Mar 7 01:47:25.948822 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 01:47:28.174813 containerd[1468]: time="2026-03-07T01:47:28.173163553Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 7 01:47:29.780995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446973899.mount: Deactivated successfully.
Mar 7 01:47:30.822460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:47:30.868944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:47:31.406092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:47:31.573371 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:47:32.584012 kubelet[1838]: E0307 01:47:32.582888 1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:47:32.685455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:47:32.704742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:47:32.706029 systemd[1]: kubelet.service: Consumed 1.142s CPU time.
Mar 7 01:47:42.394377 containerd[1468]: time="2026-03-07T01:47:42.391280106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:42.418797 containerd[1468]: time="2026-03-07T01:47:42.418392218Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467"
Mar 7 01:47:42.430097 containerd[1468]: time="2026-03-07T01:47:42.429972691Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:42.461427 containerd[1468]: time="2026-03-07T01:47:42.458024520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:42.473623 containerd[1468]: time="2026-03-07T01:47:42.472761443Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 14.29659648s"
Mar 7 01:47:42.473623 containerd[1468]: time="2026-03-07T01:47:42.472844949Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 7 01:47:42.488277 containerd[1468]: time="2026-03-07T01:47:42.483660657Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 7 01:47:42.830428 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 7 01:47:42.854435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:47:43.862779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:47:43.987457 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:47:45.568643 kubelet[1887]: E0307 01:47:45.568025 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:47:45.591461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:47:45.592245 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:47:45.599246 systemd[1]: kubelet.service: Consumed 1.508s CPU time.
Mar 7 01:47:49.310154 update_engine[1459]: I20260307 01:47:49.278360 1459 update_attempter.cc:509] Updating boot flags...
Mar 7 01:47:50.403470 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1908)
Mar 7 01:47:51.557645 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1910)
Mar 7 01:47:54.970473 containerd[1468]: time="2026-03-07T01:47:54.968679603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:54.976753 containerd[1468]: time="2026-03-07T01:47:54.976348857Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700"
Mar 7 01:47:54.984126 containerd[1468]: time="2026-03-07T01:47:54.982839240Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:55.031872 containerd[1468]: time="2026-03-07T01:47:55.030901273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:47:55.057388 containerd[1468]: time="2026-03-07T01:47:55.047927243Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 12.564125982s"
Mar 7 01:47:55.057388 containerd[1468]: time="2026-03-07T01:47:55.052879718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 7 01:47:55.185121 containerd[1468]: time="2026-03-07T01:47:55.181022157Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 7 01:47:55.824360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 7 01:47:55.873415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:47:58.400957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:47:58.408250 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:47:59.732495 kubelet[1927]: E0307 01:47:59.732349 1927 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:47:59.778969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:47:59.781391 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:47:59.782048 systemd[1]: kubelet.service: Consumed 1.446s CPU time.
Mar 7 01:48:04.515830 containerd[1468]: time="2026-03-07T01:48:04.512771223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:04.520370 containerd[1468]: time="2026-03-07T01:48:04.520045915Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429"
Mar 7 01:48:04.526654 containerd[1468]: time="2026-03-07T01:48:04.526208811Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:04.559114 containerd[1468]: time="2026-03-07T01:48:04.554655606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:04.561675 containerd[1468]: time="2026-03-07T01:48:04.560720629Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 9.379518474s"
Mar 7 01:48:04.561675 containerd[1468]: time="2026-03-07T01:48:04.560797929Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 7 01:48:04.564019 containerd[1468]: time="2026-03-07T01:48:04.563739055Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 7 01:48:09.830252 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 7 01:48:09.925137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:48:11.928002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:48:11.933726 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:48:12.603110 kubelet[1948]: E0307 01:48:12.602729 1948 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:48:12.618973 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:48:12.619313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:48:12.620324 systemd[1]: kubelet.service: Consumed 1.774s CPU time.
Mar 7 01:48:14.049443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17557835.mount: Deactivated successfully.
Mar 7 01:48:21.402736 containerd[1468]: time="2026-03-07T01:48:21.402045080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:21.415485 containerd[1468]: time="2026-03-07T01:48:21.414799750Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 7 01:48:21.420438 containerd[1468]: time="2026-03-07T01:48:21.420222866Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:21.433004 containerd[1468]: time="2026-03-07T01:48:21.432917472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:21.434439 containerd[1468]: time="2026-03-07T01:48:21.434244234Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 16.870420686s"
Mar 7 01:48:21.434439 containerd[1468]: time="2026-03-07T01:48:21.434340893Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 7 01:48:21.438776 containerd[1468]: time="2026-03-07T01:48:21.438686822Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 7 01:48:22.764102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 7 01:48:22.796081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:48:22.852187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056117276.mount: Deactivated successfully.
Mar 7 01:48:24.974806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:48:25.037228 (kubelet)[1980]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:48:26.414457 kubelet[1980]: E0307 01:48:26.414211 1980 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:48:26.420005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:48:26.420299 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:48:26.421081 systemd[1]: kubelet.service: Consumed 1.823s CPU time.
Mar 7 01:48:36.579804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 7 01:48:36.630876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:48:38.258813 containerd[1468]: time="2026-03-07T01:48:38.255094449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:38.275908 containerd[1468]: time="2026-03-07T01:48:38.275114055Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 7 01:48:38.304902 containerd[1468]: time="2026-03-07T01:48:38.284412705Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:38.332004 containerd[1468]: time="2026-03-07T01:48:38.328192890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:38.352486 containerd[1468]: time="2026-03-07T01:48:38.350111901Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 16.91133326s"
Mar 7 01:48:38.352486 containerd[1468]: time="2026-03-07T01:48:38.350268031Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 7 01:48:38.353599 containerd[1468]: time="2026-03-07T01:48:38.353166885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 7 01:48:38.604906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:48:38.608066 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:48:39.373963 kubelet[2041]: E0307 01:48:39.373791 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:48:39.380367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:48:39.380757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:48:39.381268 systemd[1]: kubelet.service: Consumed 1.802s CPU time.
Mar 7 01:48:40.065934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81266157.mount: Deactivated successfully.
Mar 7 01:48:40.098192 containerd[1468]: time="2026-03-07T01:48:40.097029266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:40.104215 containerd[1468]: time="2026-03-07T01:48:40.101504336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 7 01:48:40.106703 containerd[1468]: time="2026-03-07T01:48:40.106630218Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:40.119381 containerd[1468]: time="2026-03-07T01:48:40.118162593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:40.120404 containerd[1468]: time="2026-03-07T01:48:40.120160520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.766936848s"
Mar 7 01:48:40.120404 containerd[1468]: time="2026-03-07T01:48:40.120243431Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 7 01:48:40.130733 containerd[1468]: time="2026-03-07T01:48:40.130505068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 7 01:48:41.018658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1124231686.mount: Deactivated successfully.
Mar 7 01:48:45.485332 containerd[1468]: time="2026-03-07T01:48:45.484519774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:45.497505 containerd[1468]: time="2026-03-07T01:48:45.494280983Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 7 01:48:45.502484 containerd[1468]: time="2026-03-07T01:48:45.501072370Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:45.511232 containerd[1468]: time="2026-03-07T01:48:45.510473847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:48:45.513125 containerd[1468]: time="2026-03-07T01:48:45.512907782Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 5.382238658s"
Mar 7 01:48:45.513125 containerd[1468]: time="2026-03-07T01:48:45.512979528Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 7 01:48:49.573415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 7 01:48:49.610503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:48:50.148135 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:48:50.149415 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:48:50.183384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:48:50.268081 systemd[1]: Reloading requested from client PID 2155 ('systemctl') (unit session-7.scope)...
Mar 7 01:48:50.268137 systemd[1]: Reloading...
Mar 7 01:48:50.516674 zram_generator::config[2194]: No configuration found.
Mar 7 01:48:50.976130 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:48:51.159425 systemd[1]: Reloading finished in 890 ms.
Mar 7 01:48:51.371475 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 01:48:51.371740 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 01:48:51.376300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:48:51.407852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:48:51.960417 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:48:51.961683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:48:52.445987 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:48:53.659171 kubelet[2242]: I0307 01:48:53.649379 2242 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 01:48:53.659171 kubelet[2242]: I0307 01:48:53.656846 2242 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:48:53.659171 kubelet[2242]: I0307 01:48:53.656880 2242 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:48:53.659171 kubelet[2242]: I0307 01:48:53.656891 2242 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:48:53.659171 kubelet[2242]: I0307 01:48:53.657761 2242 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 01:48:53.929604 kubelet[2242]: E0307 01:48:53.928151 2242 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:48:53.945618 kubelet[2242]: I0307 01:48:53.940747 2242 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:48:54.279779 kubelet[2242]: E0307 01:48:54.279126 2242 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:48:54.279779 kubelet[2242]: I0307 01:48:54.279234 2242 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 7 01:48:54.351820 kubelet[2242]: I0307 01:48:54.350646 2242 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 7 01:48:54.365614 kubelet[2242]: I0307 01:48:54.360005 2242 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:48:54.365614 kubelet[2242]: I0307 01:48:54.360128 2242 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 
01:48:54.365614 kubelet[2242]: I0307 01:48:54.360412 2242 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 01:48:54.365614 kubelet[2242]: I0307 01:48:54.360427 2242 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 01:48:54.366089 kubelet[2242]: I0307 01:48:54.360907 2242 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:48:54.412972 kubelet[2242]: I0307 01:48:54.409903 2242 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 01:48:54.414644 kubelet[2242]: I0307 01:48:54.414002 2242 kubelet.go:482] "Attempting to sync node with API server" Mar 7 01:48:54.414644 kubelet[2242]: I0307 01:48:54.414061 2242 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:48:54.414644 kubelet[2242]: I0307 01:48:54.414105 2242 kubelet.go:394] "Adding apiserver pod source" Mar 7 01:48:54.414644 kubelet[2242]: I0307 01:48:54.414120 2242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:48:54.446219 kubelet[2242]: I0307 01:48:54.440954 2242 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:48:54.470980 kubelet[2242]: I0307 01:48:54.466157 2242 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:48:54.470980 kubelet[2242]: I0307 01:48:54.466213 2242 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:48:54.470980 kubelet[2242]: W0307 01:48:54.466459 2242 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 7 01:48:54.549721 kubelet[2242]: I0307 01:48:54.540881 2242 server.go:1257] "Started kubelet" Mar 7 01:48:54.554620 kubelet[2242]: I0307 01:48:54.549668 2242 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:48:54.554620 kubelet[2242]: I0307 01:48:54.551311 2242 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:48:54.562791 kubelet[2242]: I0307 01:48:54.560264 2242 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:48:54.564421 kubelet[2242]: I0307 01:48:54.564386 2242 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:48:54.602929 kubelet[2242]: I0307 01:48:54.599237 2242 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 01:48:54.602929 kubelet[2242]: I0307 01:48:54.597895 2242 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:48:54.609917 kubelet[2242]: E0307 01:48:54.600294 2242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6bfc4403707d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:48:54.540718205 +0000 UTC m=+2.538269072,LastTimestamp:2026-03-07 01:48:54.540718205 +0000 UTC m=+2.538269072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:48:54.609917 kubelet[2242]: I0307 01:48:54.608925 2242 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:48:54.626069 kubelet[2242]: I0307 01:48:54.623290 2242 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 01:48:54.626069 kubelet[2242]: E0307 01:48:54.626154 2242 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:48:54.635444 kubelet[2242]: I0307 01:48:54.626845 2242 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:48:54.635444 kubelet[2242]: E0307 01:48:54.626983 2242 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms" Mar 7 01:48:54.635444 kubelet[2242]: I0307 01:48:54.627064 2242 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:48:54.659673 kubelet[2242]: I0307 01:48:54.657196 2242 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:48:54.672430 kubelet[2242]: E0307 01:48:54.667884 2242 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:48:54.678500 kubelet[2242]: I0307 01:48:54.672658 2242 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:48:54.678500 kubelet[2242]: I0307 01:48:54.672710 2242 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:48:54.730718 kubelet[2242]: E0307 01:48:54.727818 2242 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:48:54.798006 kubelet[2242]: I0307 01:48:54.786687 2242 cpu_manager.go:225] "Starting" policy="none" Mar 7 01:48:54.798006 kubelet[2242]: I0307 01:48:54.786721 2242 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 01:48:54.798006 kubelet[2242]: I0307 01:48:54.786758 2242 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 01:48:54.813049 kubelet[2242]: I0307 01:48:54.812898 2242 policy_none.go:50] "Start" Mar 7 01:48:54.813049 kubelet[2242]: I0307 01:48:54.812963 2242 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:48:54.813049 kubelet[2242]: I0307 01:48:54.812983 2242 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:48:54.820675 kubelet[2242]: I0307 01:48:54.820455 2242 policy_none.go:44] "Start" Mar 7 01:48:54.828738 kubelet[2242]: E0307 01:48:54.828034 2242 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:48:54.833314 kubelet[2242]: E0307 01:48:54.833083 2242 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms" Mar 7 01:48:54.837493 kubelet[2242]: I0307 01:48:54.836312 2242 kubelet_network_linux.go:54] 
"Initialized iptables rules." protocol="IPv4" Mar 7 01:48:54.842686 kubelet[2242]: I0307 01:48:54.842072 2242 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 01:48:54.842686 kubelet[2242]: I0307 01:48:54.842100 2242 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 01:48:54.842686 kubelet[2242]: I0307 01:48:54.842128 2242 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 01:48:54.842686 kubelet[2242]: E0307 01:48:54.842198 2242 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:48:54.871791 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 01:48:54.917685 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:48:54.929739 kubelet[2242]: E0307 01:48:54.929324 2242 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:48:54.936667 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 7 01:48:54.945233 kubelet[2242]: E0307 01:48:54.943463 2242 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:48:54.986226 kubelet[2242]: E0307 01:48:54.985103 2242 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:48:55.009940 kubelet[2242]: I0307 01:48:54.993213 2242 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 01:48:55.009940 kubelet[2242]: I0307 01:48:55.002166 2242 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:48:55.009940 kubelet[2242]: I0307 01:48:55.002855 2242 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 01:48:55.009940 kubelet[2242]: E0307 01:48:55.009317 2242 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:48:55.009940 kubelet[2242]: E0307 01:48:55.009935 2242 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:48:55.117239 kubelet[2242]: I0307 01:48:55.116342 2242 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 01:48:55.117239 kubelet[2242]: E0307 01:48:55.116971 2242 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 7 01:48:55.244333 kubelet[2242]: I0307 01:48:55.239229 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d19c38559edea5810548fa678617ca5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d19c38559edea5810548fa678617ca5\") " pod="kube-system/kube-apiserver-localhost" Mar 7 
01:48:55.244333 kubelet[2242]: I0307 01:48:55.239344 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d19c38559edea5810548fa678617ca5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d19c38559edea5810548fa678617ca5\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:48:55.244333 kubelet[2242]: I0307 01:48:55.239434 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d19c38559edea5810548fa678617ca5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d19c38559edea5810548fa678617ca5\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:48:55.256989 kubelet[2242]: E0307 01:48:55.253137 2242 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms" Mar 7 01:48:55.303384 systemd[1]: Created slice kubepods-burstable-pod9d19c38559edea5810548fa678617ca5.slice - libcontainer container kubepods-burstable-pod9d19c38559edea5810548fa678617ca5.slice. 
Mar 7 01:48:55.319697 kubelet[2242]: I0307 01:48:55.319051 2242 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 01:48:55.319697 kubelet[2242]: E0307 01:48:55.319426 2242 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 7 01:48:55.347433 kubelet[2242]: E0307 01:48:55.343695 2242 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:48:55.351981 kubelet[2242]: I0307 01:48:55.342461 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:48:55.351981 kubelet[2242]: I0307 01:48:55.350942 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:48:55.351981 kubelet[2242]: I0307 01:48:55.350972 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:48:55.363910 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container 
kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 7 01:48:55.379662 kubelet[2242]: I0307 01:48:55.377693 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:48:55.379662 kubelet[2242]: I0307 01:48:55.377827 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:48:55.379662 kubelet[2242]: I0307 01:48:55.377928 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:48:55.386805 kubelet[2242]: E0307 01:48:55.386057 2242 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:48:55.437106 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 7 01:48:55.457319 kubelet[2242]: E0307 01:48:55.457234 2242 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:48:55.661810 kubelet[2242]: E0307 01:48:55.660159 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:48:55.663323 containerd[1468]: time="2026-03-07T01:48:55.663188075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d19c38559edea5810548fa678617ca5,Namespace:kube-system,Attempt:0,}" Mar 7 01:48:55.702925 kubelet[2242]: E0307 01:48:55.700124 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:48:55.713809 containerd[1468]: time="2026-03-07T01:48:55.706627508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 7 01:48:55.725977 kubelet[2242]: I0307 01:48:55.725353 2242 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 01:48:55.726466 kubelet[2242]: E0307 01:48:55.726018 2242 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 7 01:48:55.780029 kubelet[2242]: E0307 01:48:55.779144 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:48:55.781307 containerd[1468]: time="2026-03-07T01:48:55.781246650Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 7 01:48:56.061070 kubelet[2242]: E0307 01:48:56.060090 2242 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:48:56.061070 kubelet[2242]: E0307 01:48:56.060452 2242 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="1.6s" Mar 7 01:48:56.546158 kubelet[2242]: I0307 01:48:56.542701 2242 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 01:48:56.546158 kubelet[2242]: E0307 01:48:56.544632 2242 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 7 01:48:56.633103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189465208.mount: Deactivated successfully. 
Mar 7 01:48:56.700026 containerd[1468]: time="2026-03-07T01:48:56.698187661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:48:56.723021 containerd[1468]: time="2026-03-07T01:48:56.721422202Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:48:56.727764 containerd[1468]: time="2026-03-07T01:48:56.726429326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 01:48:56.727764 containerd[1468]: time="2026-03-07T01:48:56.726625356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:48:56.733349 containerd[1468]: time="2026-03-07T01:48:56.733140874Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:48:56.746935 containerd[1468]: time="2026-03-07T01:48:56.745231877Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:48:56.747732 containerd[1468]: time="2026-03-07T01:48:56.747629742Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:48:56.778894 containerd[1468]: time="2026-03-07T01:48:56.776978041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:48:56.779976 
containerd[1468]: time="2026-03-07T01:48:56.779887879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.116568112s" Mar 7 01:48:56.784958 containerd[1468]: time="2026-03-07T01:48:56.783345064Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.001507238s" Mar 7 01:48:56.817262 containerd[1468]: time="2026-03-07T01:48:56.816319687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.108382224s" Mar 7 01:48:57.426072 containerd[1468]: time="2026-03-07T01:48:57.411103536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:48:57.426072 containerd[1468]: time="2026-03-07T01:48:57.411189286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:48:57.426072 containerd[1468]: time="2026-03-07T01:48:57.411219716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:48:57.426072 containerd[1468]: time="2026-03-07T01:48:57.411370204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:48:57.449729 containerd[1468]: time="2026-03-07T01:48:57.428712991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:48:57.449729 containerd[1468]: time="2026-03-07T01:48:57.449304150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:48:57.449729 containerd[1468]: time="2026-03-07T01:48:57.449325442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:48:57.449729 containerd[1468]: time="2026-03-07T01:48:57.449454097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:48:57.462446 containerd[1468]: time="2026-03-07T01:48:57.452676196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:48:57.462446 containerd[1468]: time="2026-03-07T01:48:57.452748069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:48:57.462446 containerd[1468]: time="2026-03-07T01:48:57.452783158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:48:57.462446 containerd[1468]: time="2026-03-07T01:48:57.452989778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:48:57.607209 systemd[1]: Started cri-containerd-8b60bbf0e4d2cf9349ee356597c93eae9cb2514c7c1ba21d2c027855d9276d90.scope - libcontainer container 8b60bbf0e4d2cf9349ee356597c93eae9cb2514c7c1ba21d2c027855d9276d90. 
Mar 7 01:48:57.662255 kubelet[2242]: E0307 01:48:57.661991 2242 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="3.2s" Mar 7 01:48:57.684133 systemd[1]: Started cri-containerd-05694b323495f8a1951e8e8ff1e05af619296fdcf8e90cc7e4a5bb7f5898ac8f.scope - libcontainer container 05694b323495f8a1951e8e8ff1e05af619296fdcf8e90cc7e4a5bb7f5898ac8f. Mar 7 01:48:57.693788 systemd[1]: Started cri-containerd-2ef9be9893e9369351d8f84c56b125da46be4f813026395787026def5c6d1f26.scope - libcontainer container 2ef9be9893e9369351d8f84c56b125da46be4f813026395787026def5c6d1f26. Mar 7 01:48:57.933192 kubelet[2242]: E0307 01:48:57.932389 2242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6bfc4403707d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:48:54.540718205 +0000 UTC m=+2.538269072,LastTimestamp:2026-03-07 01:48:54.540718205 +0000 UTC m=+2.538269072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:48:57.962046 containerd[1468]: time="2026-03-07T01:48:57.952829493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b60bbf0e4d2cf9349ee356597c93eae9cb2514c7c1ba21d2c027855d9276d90\"" Mar 7 01:48:57.962649 kubelet[2242]: 
E0307 01:48:57.954870 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:48:58.006831 containerd[1468]: time="2026-03-07T01:48:58.006349582Z" level=info msg="CreateContainer within sandbox \"8b60bbf0e4d2cf9349ee356597c93eae9cb2514c7c1ba21d2c027855d9276d90\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:48:58.066751 containerd[1468]: time="2026-03-07T01:48:58.066244414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d19c38559edea5810548fa678617ca5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ef9be9893e9369351d8f84c56b125da46be4f813026395787026def5c6d1f26\"" Mar 7 01:48:58.086108 kubelet[2242]: E0307 01:48:58.084484 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:48:58.089705 containerd[1468]: time="2026-03-07T01:48:58.089657098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"05694b323495f8a1951e8e8ff1e05af619296fdcf8e90cc7e4a5bb7f5898ac8f\"" Mar 7 01:48:58.099400 kubelet[2242]: E0307 01:48:58.099359 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:48:58.124348 containerd[1468]: time="2026-03-07T01:48:58.124297258Z" level=info msg="CreateContainer within sandbox \"2ef9be9893e9369351d8f84c56b125da46be4f813026395787026def5c6d1f26\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:48:58.139765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024252401.mount: Deactivated successfully. 
Mar 7 01:48:58.145527 containerd[1468]: time="2026-03-07T01:48:58.145222136Z" level=info msg="CreateContainer within sandbox \"05694b323495f8a1951e8e8ff1e05af619296fdcf8e90cc7e4a5bb7f5898ac8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:48:58.165182 kubelet[2242]: I0307 01:48:58.161251 2242 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 01:48:58.165182 kubelet[2242]: E0307 01:48:58.162361 2242 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Mar 7 01:48:58.177204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3838147937.mount: Deactivated successfully. Mar 7 01:48:58.260400 containerd[1468]: time="2026-03-07T01:48:58.258228169Z" level=info msg="CreateContainer within sandbox \"2ef9be9893e9369351d8f84c56b125da46be4f813026395787026def5c6d1f26\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f0c158103a4328fed3c24661f6d71f01872363fded2f8fd7ef75d7de959122a\"" Mar 7 01:48:58.265921 containerd[1468]: time="2026-03-07T01:48:58.265176119Z" level=info msg="StartContainer for \"8f0c158103a4328fed3c24661f6d71f01872363fded2f8fd7ef75d7de959122a\"" Mar 7 01:48:58.268953 containerd[1468]: time="2026-03-07T01:48:58.268866001Z" level=info msg="CreateContainer within sandbox \"8b60bbf0e4d2cf9349ee356597c93eae9cb2514c7c1ba21d2c027855d9276d90\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9cdfa0ed7a4692d3b601ba019998b3acdbc200ab484677cb1ff161db9363b174\"" Mar 7 01:48:58.269645 containerd[1468]: time="2026-03-07T01:48:58.269506640Z" level=info msg="StartContainer for \"9cdfa0ed7a4692d3b601ba019998b3acdbc200ab484677cb1ff161db9363b174\"" Mar 7 01:48:58.313178 containerd[1468]: time="2026-03-07T01:48:58.312058284Z" level=info msg="CreateContainer within sandbox 
\"05694b323495f8a1951e8e8ff1e05af619296fdcf8e90cc7e4a5bb7f5898ac8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9ddaa6fa4c9668f7e67482ac50d23e4e2dade66b62f7e33b9899d7d604bb969c\"" Mar 7 01:48:58.314869 containerd[1468]: time="2026-03-07T01:48:58.314834921Z" level=info msg="StartContainer for \"9ddaa6fa4c9668f7e67482ac50d23e4e2dade66b62f7e33b9899d7d604bb969c\"" Mar 7 01:48:58.415517 systemd[1]: Started cri-containerd-8f0c158103a4328fed3c24661f6d71f01872363fded2f8fd7ef75d7de959122a.scope - libcontainer container 8f0c158103a4328fed3c24661f6d71f01872363fded2f8fd7ef75d7de959122a. Mar 7 01:48:58.467042 systemd[1]: Started cri-containerd-9cdfa0ed7a4692d3b601ba019998b3acdbc200ab484677cb1ff161db9363b174.scope - libcontainer container 9cdfa0ed7a4692d3b601ba019998b3acdbc200ab484677cb1ff161db9363b174. Mar 7 01:48:58.510218 systemd[1]: Started cri-containerd-9ddaa6fa4c9668f7e67482ac50d23e4e2dade66b62f7e33b9899d7d604bb969c.scope - libcontainer container 9ddaa6fa4c9668f7e67482ac50d23e4e2dade66b62f7e33b9899d7d604bb969c. 
Mar 7 01:48:58.960940 containerd[1468]: time="2026-03-07T01:48:58.960680445Z" level=info msg="StartContainer for \"8f0c158103a4328fed3c24661f6d71f01872363fded2f8fd7ef75d7de959122a\" returns successfully" Mar 7 01:48:59.080204 kubelet[2242]: E0307 01:48:59.079728 2242 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:48:59.080204 kubelet[2242]: E0307 01:48:59.079911 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:48:59.172753 containerd[1468]: time="2026-03-07T01:48:59.162697516Z" level=info msg="StartContainer for \"9ddaa6fa4c9668f7e67482ac50d23e4e2dade66b62f7e33b9899d7d604bb969c\" returns successfully" Mar 7 01:48:59.172753 containerd[1468]: time="2026-03-07T01:48:59.163510604Z" level=info msg="StartContainer for \"9cdfa0ed7a4692d3b601ba019998b3acdbc200ab484677cb1ff161db9363b174\" returns successfully" Mar 7 01:49:00.182723 kubelet[2242]: E0307 01:49:00.180056 2242 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:49:00.271710 kubelet[2242]: E0307 01:49:00.271339 2242 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:49:00.271710 kubelet[2242]: E0307 01:49:00.271519 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:00.322024 kubelet[2242]: E0307 01:49:00.318943 2242 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:49:00.322024 kubelet[2242]: E0307 01:49:00.321911 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:00.322908 kubelet[2242]: E0307 01:49:00.322874 2242 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:49:00.328796 kubelet[2242]: E0307 01:49:00.323181 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:01.455743 kubelet[2242]: E0307 01:49:01.454487 2242 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:49:01.455743 kubelet[2242]: E0307 01:49:01.454967 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:01.464244 kubelet[2242]: I0307 01:49:01.464113 2242 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 01:49:01.466136 kubelet[2242]: E0307 01:49:01.466003 2242 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:49:01.466768 kubelet[2242]: E0307 01:49:01.466746 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:02.470828 kubelet[2242]: E0307 01:49:02.466333 2242 kubelet.go:3336] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:49:02.470828 kubelet[2242]: E0307 01:49:02.466868 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:04.081996 kubelet[2242]: E0307 01:49:04.071948 2242 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:49:04.081996 kubelet[2242]: E0307 01:49:04.072488 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:04.716500 kubelet[2242]: E0307 01:49:04.705990 2242 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:49:04.716500 kubelet[2242]: E0307 01:49:04.709154 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:05.013803 kubelet[2242]: E0307 01:49:05.010091 2242 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:49:08.898269 kubelet[2242]: E0307 01:49:08.896629 2242 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 7 01:49:08.964697 kubelet[2242]: E0307 01:49:08.957897 2242 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6bfc4403707d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:48:54.540718205 +0000 UTC m=+2.538269072,LastTimestamp:2026-03-07 01:48:54.540718205 +0000 UTC m=+2.538269072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:49:09.058117 kubelet[2242]: E0307 01:49:09.057888 2242 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6bfc4b93a5ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:48:54.667609546 +0000 UTC m=+2.665160383,LastTimestamp:2026-03-07 01:48:54.667609546 +0000 UTC m=+2.665160383,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:49:09.180300 kubelet[2242]: E0307 01:49:09.179825 2242 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6bfc5223d0a6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:48:54.777720998 +0000 UTC m=+2.775271826,LastTimestamp:2026-03-07 01:48:54.777720998 +0000 UTC 
m=+2.775271826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:49:09.234309 kubelet[2242]: I0307 01:49:09.230799 2242 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 7 01:49:09.317228 kubelet[2242]: E0307 01:49:09.312925 2242 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6bfc5224676b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:48:54.777759595 +0000 UTC m=+2.775310411,LastTimestamp:2026-03-07 01:48:54.777759595 +0000 UTC m=+2.775310411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:49:09.331165 kubelet[2242]: I0307 01:49:09.328281 2242 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:49:09.446753 kubelet[2242]: E0307 01:49:09.437686 2242 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 7 01:49:09.446753 kubelet[2242]: I0307 01:49:09.437790 2242 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:49:09.472757 kubelet[2242]: E0307 01:49:09.458818 2242 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 7 
01:49:09.472757 kubelet[2242]: I0307 01:49:09.464823 2242 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:49:09.483468 kubelet[2242]: I0307 01:49:09.483382 2242 apiserver.go:52] "Watching apiserver" Mar 7 01:49:09.503926 kubelet[2242]: E0307 01:49:09.502154 2242 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:49:09.628876 kubelet[2242]: I0307 01:49:09.628710 2242 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:49:11.226796 kubelet[2242]: I0307 01:49:11.220196 2242 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:49:11.276982 kubelet[2242]: E0307 01:49:11.276907 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:11.604165 kubelet[2242]: E0307 01:49:11.604007 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:15.464520 kubelet[2242]: I0307 01:49:15.464102 2242 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:49:16.058106 kubelet[2242]: E0307 01:49:16.045930 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:17.625040 kubelet[2242]: E0307 01:49:17.624256 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 
01:49:17.739433 kubelet[2242]: I0307 01:49:17.734926 2242 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.734898701 podStartE2EDuration="6.734898701s" podCreationTimestamp="2026-03-07 01:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:49:16.331933207 +0000 UTC m=+24.329484054" watchObservedRunningTime="2026-03-07 01:49:17.734898701 +0000 UTC m=+25.732449518" Mar 7 01:49:17.739433 kubelet[2242]: I0307 01:49:17.735325 2242 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.735313986 podStartE2EDuration="2.735313986s" podCreationTimestamp="2026-03-07 01:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:49:17.73405927 +0000 UTC m=+25.731610117" watchObservedRunningTime="2026-03-07 01:49:17.735313986 +0000 UTC m=+25.732864803" Mar 7 01:49:20.064842 kubelet[2242]: E0307 01:49:20.062149 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:23.988692 kubelet[2242]: I0307 01:49:23.986255 2242 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:49:24.251208 kubelet[2242]: E0307 01:49:24.240785 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:25.266820 kubelet[2242]: E0307 01:49:25.264004 2242 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 
01:49:25.340470 kubelet[2242]: I0307 01:49:25.338911 2242 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.338891496 podStartE2EDuration="1.338891496s" podCreationTimestamp="2026-03-07 01:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:49:25.336270511 +0000 UTC m=+33.333821338" watchObservedRunningTime="2026-03-07 01:49:25.338891496 +0000 UTC m=+33.336442313" Mar 7 01:49:25.715498 systemd[1]: Reloading requested from client PID 2538 ('systemctl') (unit session-7.scope)... Mar 7 01:49:25.715522 systemd[1]: Reloading... Mar 7 01:49:26.536817 zram_generator::config[2580]: No configuration found. Mar 7 01:49:27.559497 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:49:27.833977 systemd[1]: Reloading finished in 2115 ms. Mar 7 01:49:28.099429 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:49:28.156089 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:49:28.156449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:49:28.156727 systemd[1]: kubelet.service: Consumed 9.139s CPU time, 132.0M memory peak, 0B memory swap peak. Mar 7 01:49:28.221294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:49:28.994141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:49:28.995798 (kubelet)[2622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:49:29.301074 kubelet[2622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:49:29.366707 kubelet[2622]: I0307 01:49:29.366159 2622 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 01:49:29.366707 kubelet[2622]: I0307 01:49:29.366485 2622 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:49:29.366707 kubelet[2622]: I0307 01:49:29.366518 2622 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:49:29.366707 kubelet[2622]: I0307 01:49:29.366622 2622 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:49:29.384061 kubelet[2622]: I0307 01:49:29.383821 2622 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 01:49:29.389744 kubelet[2622]: I0307 01:49:29.389244 2622 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:49:29.455988 kubelet[2622]: I0307 01:49:29.455358 2622 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:49:29.489882 kubelet[2622]: E0307 01:49:29.485697 2622 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:49:29.489882 kubelet[2622]: I0307 01:49:29.488968 2622 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:49:29.542804 kubelet[2622]: I0307 01:49:29.541790 2622 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:49:29.544729 kubelet[2622]: I0307 01:49:29.544284 2622 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:49:29.544994 kubelet[2622]: I0307 01:49:29.544444 2622 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:49:29.544994 kubelet[2622]: I0307 01:49:29.544768 2622 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 01:49:29.544994 
kubelet[2622]: I0307 01:49:29.544782 2622 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 01:49:29.544994 kubelet[2622]: I0307 01:49:29.544815 2622 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:49:29.545741 kubelet[2622]: I0307 01:49:29.545657 2622 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 01:49:29.558821 kubelet[2622]: I0307 01:49:29.546211 2622 kubelet.go:482] "Attempting to sync node with API server" Mar 7 01:49:29.558821 kubelet[2622]: I0307 01:49:29.546282 2622 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:49:29.558821 kubelet[2622]: I0307 01:49:29.546314 2622 kubelet.go:394] "Adding apiserver pod source" Mar 7 01:49:29.558821 kubelet[2622]: I0307 01:49:29.546328 2622 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:49:29.558821 kubelet[2622]: I0307 01:49:29.557339 2622 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:49:29.564883 kubelet[2622]: I0307 01:49:29.564017 2622 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:49:29.564883 kubelet[2622]: I0307 01:49:29.564080 2622 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:49:29.759962 kubelet[2622]: I0307 01:49:29.736140 2622 server.go:1257] "Started kubelet" Mar 7 01:49:29.759962 kubelet[2622]: I0307 01:49:29.737900 2622 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:49:29.759962 kubelet[2622]: I0307 01:49:29.737972 2622 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:49:29.759962 kubelet[2622]: I0307 01:49:29.746705 2622 server.go:254] "Starting 
to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:49:29.759962 kubelet[2622]: I0307 01:49:29.746824 2622 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:49:29.841382 kubelet[2622]: I0307 01:49:29.840135 2622 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 01:49:29.878916 kubelet[2622]: I0307 01:49:29.878318 2622 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:49:29.900839 kubelet[2622]: I0307 01:49:29.895053 2622 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:49:29.909919 kubelet[2622]: I0307 01:49:29.908033 2622 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:49:29.909919 kubelet[2622]: I0307 01:49:29.908252 2622 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:49:29.909919 kubelet[2622]: I0307 01:49:29.908711 2622 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 01:49:29.909919 kubelet[2622]: I0307 01:49:29.909461 2622 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:49:29.909919 kubelet[2622]: I0307 01:49:29.909823 2622 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:49:29.959796 kubelet[2622]: I0307 01:49:29.959187 2622 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:49:31.146098 kubelet[2622]: I0307 01:49:31.135851 2622 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 01:49:31.187778 kubelet[2622]: I0307 01:49:31.184445 2622 apiserver.go:52] "Watching apiserver" Mar 7 01:49:31.339710 kubelet[2622]: I0307 01:49:31.338883 2622 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 7 01:49:31.339710 kubelet[2622]: I0307 01:49:31.338939 2622 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 01:49:31.339710 kubelet[2622]: I0307 01:49:31.338973 2622 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 01:49:31.339710 kubelet[2622]: E0307 01:49:31.339068 2622 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:49:31.481708 kubelet[2622]: E0307 01:49:31.464814 2622 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:49:31.671608 kubelet[2622]: E0307 01:49:31.668887 2622 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:49:31.859639 kubelet[2622]: I0307 01:49:31.858635 2622 cpu_manager.go:225] "Starting" policy="none" Mar 7 01:49:31.859639 kubelet[2622]: I0307 01:49:31.858660 2622 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 01:49:31.859639 kubelet[2622]: I0307 01:49:31.858849 2622 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 01:49:31.859639 kubelet[2622]: I0307 01:49:31.859243 2622 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 7 01:49:31.859639 kubelet[2622]: I0307 01:49:31.859262 2622 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 7 01:49:31.859639 kubelet[2622]: I0307 01:49:31.859306 2622 policy_none.go:50] "Start" Mar 7 01:49:31.859639 kubelet[2622]: I0307 01:49:31.859322 2622 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:49:31.859639 kubelet[2622]: I0307 01:49:31.859336 2622 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:49:31.859639 
kubelet[2622]: I0307 01:49:31.859476 2622 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 01:49:31.859639 kubelet[2622]: I0307 01:49:31.859486 2622 policy_none.go:44] "Start" Mar 7 01:49:31.966615 kubelet[2622]: E0307 01:49:31.966399 2622 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:49:31.967258 kubelet[2622]: I0307 01:49:31.966964 2622 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 01:49:31.967258 kubelet[2622]: I0307 01:49:31.966983 2622 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:49:31.975168 kubelet[2622]: I0307 01:49:31.971303 2622 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 01:49:32.038771 kubelet[2622]: E0307 01:49:32.033459 2622 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 01:49:32.120641 kubelet[2622]: I0307 01:49:32.112335 2622 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:49:32.315835 kubelet[2622]: I0307 01:49:32.310240 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:49:32.315835 kubelet[2622]: I0307 01:49:32.310617 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:49:32.315835 kubelet[2622]: I0307 01:49:32.310663 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:49:32.315835 kubelet[2622]: I0307 01:49:32.314075 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:49:32.315835 kubelet[2622]: I0307 01:49:32.314189 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:49:32.317049 kubelet[2622]: I0307 01:49:32.314315 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d19c38559edea5810548fa678617ca5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d19c38559edea5810548fa678617ca5\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:49:32.317049 kubelet[2622]: I0307 01:49:32.314405 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:49:32.317049 kubelet[2622]: I0307 01:49:32.314436 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d19c38559edea5810548fa678617ca5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d19c38559edea5810548fa678617ca5\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:49:32.317049 kubelet[2622]: I0307 01:49:32.314619 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d19c38559edea5810548fa678617ca5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d19c38559edea5810548fa678617ca5\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:49:32.373073 kubelet[2622]: I0307 01:49:32.371983 2622 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 01:49:32.524602 kubelet[2622]: I0307 01:49:32.516777 2622 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3cb7aa2b-7e4f-417b-8e68-e3c380b5ff66-kube-proxy\") pod \"kube-proxy-ssdr4\" (UID: \"3cb7aa2b-7e4f-417b-8e68-e3c380b5ff66\") " pod="kube-system/kube-proxy-ssdr4" Mar 7 01:49:32.524602 kubelet[2622]: I0307 01:49:32.517245 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vpvp\" (UniqueName: \"kubernetes.io/projected/3cb7aa2b-7e4f-417b-8e68-e3c380b5ff66-kube-api-access-4vpvp\") pod \"kube-proxy-ssdr4\" (UID: \"3cb7aa2b-7e4f-417b-8e68-e3c380b5ff66\") " pod="kube-system/kube-proxy-ssdr4" Mar 7 01:49:32.524602 kubelet[2622]: I0307 01:49:32.517354 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cb7aa2b-7e4f-417b-8e68-e3c380b5ff66-xtables-lock\") pod \"kube-proxy-ssdr4\" (UID: \"3cb7aa2b-7e4f-417b-8e68-e3c380b5ff66\") " pod="kube-system/kube-proxy-ssdr4" Mar 7 01:49:33.057153 systemd[1]: Created slice kubepods-besteffort-pod3cb7aa2b_7e4f_417b_8e68_e3c380b5ff66.slice - libcontainer container kubepods-besteffort-pod3cb7aa2b_7e4f_417b_8e68_e3c380b5ff66.slice. 
Mar 7 01:49:33.075649 kubelet[2622]: I0307 01:49:32.517375 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cb7aa2b-7e4f-417b-8e68-e3c380b5ff66-lib-modules\") pod \"kube-proxy-ssdr4\" (UID: \"3cb7aa2b-7e4f-417b-8e68-e3c380b5ff66\") " pod="kube-system/kube-proxy-ssdr4" Mar 7 01:49:33.088178 kubelet[2622]: E0307 01:49:33.088142 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:33.090929 kubelet[2622]: E0307 01:49:33.088702 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:33.093904 kubelet[2622]: E0307 01:49:33.089019 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:33.184139 kubelet[2622]: I0307 01:49:33.184100 2622 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 7 01:49:33.184436 kubelet[2622]: I0307 01:49:33.184418 2622 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 7 01:49:33.185383 kubelet[2622]: I0307 01:49:33.184683 2622 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:49:33.213654 containerd[1468]: time="2026-03-07T01:49:33.213108781Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 7 01:49:33.214649 kubelet[2622]: I0307 01:49:33.214619 2622 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:49:33.871352 kubelet[2622]: E0307 01:49:33.866720 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:33.955252 containerd[1468]: time="2026-03-07T01:49:33.939246970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ssdr4,Uid:3cb7aa2b-7e4f-417b-8e68-e3c380b5ff66,Namespace:kube-system,Attempt:0,}" Mar 7 01:49:34.146018 kubelet[2622]: E0307 01:49:34.144376 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:34.152654 kubelet[2622]: E0307 01:49:34.149485 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:35.025037 containerd[1468]: time="2026-03-07T01:49:35.022519402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:49:35.025037 containerd[1468]: time="2026-03-07T01:49:35.023092070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:49:35.025037 containerd[1468]: time="2026-03-07T01:49:35.023118259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:49:35.025037 containerd[1468]: time="2026-03-07T01:49:35.023322981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:49:35.921699 systemd[1]: run-containerd-runc-k8s.io-35e2d28087c312364cea0779ca4839818fa9471e8790b5e296322fe5accc28f8-runc.hrz4yi.mount: Deactivated successfully. Mar 7 01:49:35.956821 systemd[1]: Started cri-containerd-35e2d28087c312364cea0779ca4839818fa9471e8790b5e296322fe5accc28f8.scope - libcontainer container 35e2d28087c312364cea0779ca4839818fa9471e8790b5e296322fe5accc28f8. Mar 7 01:49:37.279029 containerd[1468]: time="2026-03-07T01:49:37.271008565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ssdr4,Uid:3cb7aa2b-7e4f-417b-8e68-e3c380b5ff66,Namespace:kube-system,Attempt:0,} returns sandbox id \"35e2d28087c312364cea0779ca4839818fa9471e8790b5e296322fe5accc28f8\"" Mar 7 01:49:37.315908 kubelet[2622]: E0307 01:49:37.310231 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:37.362928 containerd[1468]: time="2026-03-07T01:49:37.362664550Z" level=info msg="CreateContainer within sandbox \"35e2d28087c312364cea0779ca4839818fa9471e8790b5e296322fe5accc28f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:49:37.610894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279727430.mount: Deactivated successfully. 
Mar 7 01:49:37.664198 containerd[1468]: time="2026-03-07T01:49:37.660845412Z" level=info msg="CreateContainer within sandbox \"35e2d28087c312364cea0779ca4839818fa9471e8790b5e296322fe5accc28f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc568904d463838381ffd81a5c3c0fa5f0637db52fe759c90499aa6305fa9a58\"" Mar 7 01:49:37.676194 containerd[1468]: time="2026-03-07T01:49:37.673264654Z" level=info msg="StartContainer for \"bc568904d463838381ffd81a5c3c0fa5f0637db52fe759c90499aa6305fa9a58\"" Mar 7 01:49:38.413742 systemd[1]: Started cri-containerd-bc568904d463838381ffd81a5c3c0fa5f0637db52fe759c90499aa6305fa9a58.scope - libcontainer container bc568904d463838381ffd81a5c3c0fa5f0637db52fe759c90499aa6305fa9a58. Mar 7 01:49:39.913524 containerd[1468]: time="2026-03-07T01:49:39.908949860Z" level=info msg="StartContainer for \"bc568904d463838381ffd81a5c3c0fa5f0637db52fe759c90499aa6305fa9a58\" returns successfully" Mar 7 01:49:40.305761 kubelet[2622]: E0307 01:49:40.302634 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:41.365956 kubelet[2622]: E0307 01:49:41.365656 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:43.684313 kubelet[2622]: I0307 01:49:43.680149 2622 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-ssdr4" podStartSLOduration=12.680125178 podStartE2EDuration="12.680125178s" podCreationTimestamp="2026-03-07 01:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:49:40.379703964 +0000 UTC m=+11.349603497" watchObservedRunningTime="2026-03-07 01:49:43.680125178 +0000 UTC m=+14.650024691" Mar 7 01:49:43.890505 
kubelet[2622]: I0307 01:49:43.878779 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svk5x\" (UniqueName: \"kubernetes.io/projected/c2d9496b-b862-405f-b102-6e01620c2706-kube-api-access-svk5x\") pod \"kube-flannel-ds-zssc6\" (UID: \"c2d9496b-b862-405f-b102-6e01620c2706\") " pod="kube-flannel/kube-flannel-ds-zssc6" Mar 7 01:49:43.890505 kubelet[2622]: I0307 01:49:43.879941 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c2d9496b-b862-405f-b102-6e01620c2706-run\") pod \"kube-flannel-ds-zssc6\" (UID: \"c2d9496b-b862-405f-b102-6e01620c2706\") " pod="kube-flannel/kube-flannel-ds-zssc6" Mar 7 01:49:43.890505 kubelet[2622]: I0307 01:49:43.880257 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/c2d9496b-b862-405f-b102-6e01620c2706-cni-plugin\") pod \"kube-flannel-ds-zssc6\" (UID: \"c2d9496b-b862-405f-b102-6e01620c2706\") " pod="kube-flannel/kube-flannel-ds-zssc6" Mar 7 01:49:43.890505 kubelet[2622]: I0307 01:49:43.881115 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2d9496b-b862-405f-b102-6e01620c2706-xtables-lock\") pod \"kube-flannel-ds-zssc6\" (UID: \"c2d9496b-b862-405f-b102-6e01620c2706\") " pod="kube-flannel/kube-flannel-ds-zssc6" Mar 7 01:49:43.890505 kubelet[2622]: I0307 01:49:43.882235 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/c2d9496b-b862-405f-b102-6e01620c2706-cni\") pod \"kube-flannel-ds-zssc6\" (UID: \"c2d9496b-b862-405f-b102-6e01620c2706\") " pod="kube-flannel/kube-flannel-ds-zssc6" Mar 7 01:49:43.894026 kubelet[2622]: I0307 01:49:43.883843 2622 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/c2d9496b-b862-405f-b102-6e01620c2706-flannel-cfg\") pod \"kube-flannel-ds-zssc6\" (UID: \"c2d9496b-b862-405f-b102-6e01620c2706\") " pod="kube-flannel/kube-flannel-ds-zssc6" Mar 7 01:49:43.891142 systemd[1]: Created slice kubepods-burstable-podc2d9496b_b862_405f_b102_6e01620c2706.slice - libcontainer container kubepods-burstable-podc2d9496b_b862_405f_b102_6e01620c2706.slice. Mar 7 01:49:44.566412 kubelet[2622]: E0307 01:49:44.559516 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:44.584786 containerd[1468]: time="2026-03-07T01:49:44.571421165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zssc6,Uid:c2d9496b-b862-405f-b102-6e01620c2706,Namespace:kube-flannel,Attempt:0,}" Mar 7 01:49:45.036829 containerd[1468]: time="2026-03-07T01:49:45.035793814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:49:45.036829 containerd[1468]: time="2026-03-07T01:49:45.035863467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:49:45.036829 containerd[1468]: time="2026-03-07T01:49:45.035882093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:49:45.036829 containerd[1468]: time="2026-03-07T01:49:45.035985519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:49:45.050813 sudo[1618]: pam_unix(sudo:session): session closed for user root Mar 7 01:49:45.064200 sshd[1615]: pam_unix(sshd:session): session closed for user core Mar 7 01:49:45.078251 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:34090.service: Deactivated successfully. Mar 7 01:49:45.088651 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:49:45.090222 systemd[1]: session-7.scope: Consumed 13.153s CPU time, 164.0M memory peak, 0B memory swap peak. Mar 7 01:49:45.112497 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:49:45.120767 systemd-logind[1455]: Removed session 7. Mar 7 01:49:45.219645 systemd[1]: Started cri-containerd-053b596f82be03873f44d5a94208c3a2af03ba79a10782284cfd8ba00ddfe9fd.scope - libcontainer container 053b596f82be03873f44d5a94208c3a2af03ba79a10782284cfd8ba00ddfe9fd. Mar 7 01:49:45.398282 containerd[1468]: time="2026-03-07T01:49:45.398020401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zssc6,Uid:c2d9496b-b862-405f-b102-6e01620c2706,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"053b596f82be03873f44d5a94208c3a2af03ba79a10782284cfd8ba00ddfe9fd\"" Mar 7 01:49:45.407739 kubelet[2622]: E0307 01:49:45.405021 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:45.413879 containerd[1468]: time="2026-03-07T01:49:45.413820358Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Mar 7 01:49:47.481285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258061609.mount: Deactivated successfully. 
Mar 7 01:49:47.928851 containerd[1468]: time="2026-03-07T01:49:47.927075436Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:49:47.935840 containerd[1468]: time="2026-03-07T01:49:47.931927678Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Mar 7 01:49:47.941947 containerd[1468]: time="2026-03-07T01:49:47.938151948Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:49:47.960664 containerd[1468]: time="2026-03-07T01:49:47.946236554Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 2.532364898s" Mar 7 01:49:47.960664 containerd[1468]: time="2026-03-07T01:49:47.952876236Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Mar 7 01:49:47.960664 containerd[1468]: time="2026-03-07T01:49:47.953819574Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:49:48.026025 containerd[1468]: time="2026-03-07T01:49:48.018733799Z" level=info msg="CreateContainer within sandbox \"053b596f82be03873f44d5a94208c3a2af03ba79a10782284cfd8ba00ddfe9fd\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" 
Mar 7 01:49:48.180902 containerd[1468]: time="2026-03-07T01:49:48.177736456Z" level=info msg="CreateContainer within sandbox \"053b596f82be03873f44d5a94208c3a2af03ba79a10782284cfd8ba00ddfe9fd\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"27dcb6e58a263143d8a83de8eaaff8950822bf3c4a279a8fdc94008c81526601\"" Mar 7 01:49:48.180902 containerd[1468]: time="2026-03-07T01:49:48.179729516Z" level=info msg="StartContainer for \"27dcb6e58a263143d8a83de8eaaff8950822bf3c4a279a8fdc94008c81526601\"" Mar 7 01:49:48.371843 systemd[1]: Started cri-containerd-27dcb6e58a263143d8a83de8eaaff8950822bf3c4a279a8fdc94008c81526601.scope - libcontainer container 27dcb6e58a263143d8a83de8eaaff8950822bf3c4a279a8fdc94008c81526601. Mar 7 01:49:48.559670 containerd[1468]: time="2026-03-07T01:49:48.558452765Z" level=info msg="StartContainer for \"27dcb6e58a263143d8a83de8eaaff8950822bf3c4a279a8fdc94008c81526601\" returns successfully" Mar 7 01:49:48.580721 systemd[1]: cri-containerd-27dcb6e58a263143d8a83de8eaaff8950822bf3c4a279a8fdc94008c81526601.scope: Deactivated successfully. Mar 7 01:49:48.733470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27dcb6e58a263143d8a83de8eaaff8950822bf3c4a279a8fdc94008c81526601-rootfs.mount: Deactivated successfully. 
Mar 7 01:49:48.787960 kubelet[2622]: E0307 01:49:48.786287 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:48.824651 containerd[1468]: time="2026-03-07T01:49:48.824391200Z" level=info msg="shim disconnected" id=27dcb6e58a263143d8a83de8eaaff8950822bf3c4a279a8fdc94008c81526601 namespace=k8s.io Mar 7 01:49:48.824651 containerd[1468]: time="2026-03-07T01:49:48.824596411Z" level=warning msg="cleaning up after shim disconnected" id=27dcb6e58a263143d8a83de8eaaff8950822bf3c4a279a8fdc94008c81526601 namespace=k8s.io Mar 7 01:49:48.824651 containerd[1468]: time="2026-03-07T01:49:48.824616069Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:49:49.938651 kubelet[2622]: E0307 01:49:49.933948 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:49:49.949779 containerd[1468]: time="2026-03-07T01:49:49.943679897Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Mar 7 01:50:02.866441 containerd[1468]: time="2026-03-07T01:50:02.864241368Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:50:02.868284 containerd[1468]: time="2026-03-07T01:50:02.867863826Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Mar 7 01:50:02.879640 containerd[1468]: time="2026-03-07T01:50:02.876277373Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:50:02.903264 containerd[1468]: time="2026-03-07T01:50:02.902874435Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:50:02.906803 containerd[1468]: time="2026-03-07T01:50:02.906395611Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 12.962646111s" Mar 7 01:50:02.906803 containerd[1468]: time="2026-03-07T01:50:02.906490511Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Mar 7 01:50:02.955360 containerd[1468]: time="2026-03-07T01:50:02.954979513Z" level=info msg="CreateContainer within sandbox \"053b596f82be03873f44d5a94208c3a2af03ba79a10782284cfd8ba00ddfe9fd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 01:50:03.053766 containerd[1468]: time="2026-03-07T01:50:03.053710590Z" level=info msg="CreateContainer within sandbox \"053b596f82be03873f44d5a94208c3a2af03ba79a10782284cfd8ba00ddfe9fd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"527044a84c3a7e4363bea92b182a9ed8012783ad02565b664c680d39898d5ad0\"" Mar 7 01:50:03.061327 containerd[1468]: time="2026-03-07T01:50:03.057915787Z" level=info msg="StartContainer for \"527044a84c3a7e4363bea92b182a9ed8012783ad02565b664c680d39898d5ad0\"" Mar 7 01:50:03.730805 systemd[1]: run-containerd-runc-k8s.io-527044a84c3a7e4363bea92b182a9ed8012783ad02565b664c680d39898d5ad0-runc.PQZkt4.mount: Deactivated successfully. 
Mar 7 01:50:03.809508 systemd[1]: Started cri-containerd-527044a84c3a7e4363bea92b182a9ed8012783ad02565b664c680d39898d5ad0.scope - libcontainer container 527044a84c3a7e4363bea92b182a9ed8012783ad02565b664c680d39898d5ad0. Mar 7 01:50:04.220729 systemd[1]: cri-containerd-527044a84c3a7e4363bea92b182a9ed8012783ad02565b664c680d39898d5ad0.scope: Deactivated successfully. Mar 7 01:50:04.256262 containerd[1468]: time="2026-03-07T01:50:04.244011194Z" level=info msg="StartContainer for \"527044a84c3a7e4363bea92b182a9ed8012783ad02565b664c680d39898d5ad0\" returns successfully" Mar 7 01:50:04.318820 kubelet[2622]: I0307 01:50:04.318782 2622 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 7 01:50:04.353844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-527044a84c3a7e4363bea92b182a9ed8012783ad02565b664c680d39898d5ad0-rootfs.mount: Deactivated successfully. Mar 7 01:50:04.776339 systemd[1]: Created slice kubepods-burstable-pod920ee015_9421_4414_8db8_245fb0cf77ac.slice - libcontainer container kubepods-burstable-pod920ee015_9421_4414_8db8_245fb0cf77ac.slice. 
Mar 7 01:50:04.825962 kubelet[2622]: I0307 01:50:04.822970 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920ee015-9421-4414-8db8-245fb0cf77ac-config-volume\") pod \"coredns-7d764666f9-6kp77\" (UID: \"920ee015-9421-4414-8db8-245fb0cf77ac\") " pod="kube-system/coredns-7d764666f9-6kp77" Mar 7 01:50:04.825962 kubelet[2622]: I0307 01:50:04.823074 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nksjf\" (UniqueName: \"kubernetes.io/projected/920ee015-9421-4414-8db8-245fb0cf77ac-kube-api-access-nksjf\") pod \"coredns-7d764666f9-6kp77\" (UID: \"920ee015-9421-4414-8db8-245fb0cf77ac\") " pod="kube-system/coredns-7d764666f9-6kp77" Mar 7 01:50:04.845654 kubelet[2622]: E0307 01:50:04.835197 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:04.995513 systemd[1]: Created slice kubepods-burstable-pod32b9a46a_da94_4354_9286_87a0a76c1c8d.slice - libcontainer container kubepods-burstable-pod32b9a46a_da94_4354_9286_87a0a76c1c8d.slice. 
Mar 7 01:50:05.056272 containerd[1468]: time="2026-03-07T01:50:05.056207061Z" level=info msg="shim disconnected" id=527044a84c3a7e4363bea92b182a9ed8012783ad02565b664c680d39898d5ad0 namespace=k8s.io Mar 7 01:50:05.060038 containerd[1468]: time="2026-03-07T01:50:05.059999571Z" level=warning msg="cleaning up after shim disconnected" id=527044a84c3a7e4363bea92b182a9ed8012783ad02565b664c680d39898d5ad0 namespace=k8s.io Mar 7 01:50:05.065487 containerd[1468]: time="2026-03-07T01:50:05.061777245Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:50:05.077498 kubelet[2622]: I0307 01:50:05.077455 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32b9a46a-da94-4354-9286-87a0a76c1c8d-config-volume\") pod \"coredns-7d764666f9-hwvkx\" (UID: \"32b9a46a-da94-4354-9286-87a0a76c1c8d\") " pod="kube-system/coredns-7d764666f9-hwvkx" Mar 7 01:50:05.078276 kubelet[2622]: I0307 01:50:05.077930 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp5cs\" (UniqueName: \"kubernetes.io/projected/32b9a46a-da94-4354-9286-87a0a76c1c8d-kube-api-access-lp5cs\") pod \"coredns-7d764666f9-hwvkx\" (UID: \"32b9a46a-da94-4354-9286-87a0a76c1c8d\") " pod="kube-system/coredns-7d764666f9-hwvkx" Mar 7 01:50:05.184452 kubelet[2622]: E0307 01:50:05.183957 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:05.256513 containerd[1468]: time="2026-03-07T01:50:05.251283477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6kp77,Uid:920ee015-9421-4414-8db8-245fb0cf77ac,Namespace:kube-system,Attempt:0,}" Mar 7 01:50:05.483725 containerd[1468]: time="2026-03-07T01:50:05.476507175Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:50:05Z\" level=warning 
msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:50:05.677320 systemd[1]: run-netns-cni\x2dca11d326\x2dcf2e\x2d90b2\x2d703b\x2d3c4f7074c600.mount: Deactivated successfully. Mar 7 01:50:05.732608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-030e3f0733f96eafe1c3bfced45233073e779ae1bf2ea53759c513d7b3061c2e-shm.mount: Deactivated successfully. Mar 7 01:50:05.752051 kubelet[2622]: E0307 01:50:05.751711 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:05.769499 containerd[1468]: time="2026-03-07T01:50:05.769055177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hwvkx,Uid:32b9a46a-da94-4354-9286-87a0a76c1c8d,Namespace:kube-system,Attempt:0,}" Mar 7 01:50:05.800048 containerd[1468]: time="2026-03-07T01:50:05.783764492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6kp77,Uid:920ee015-9421-4414-8db8-245fb0cf77ac,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"030e3f0733f96eafe1c3bfced45233073e779ae1bf2ea53759c513d7b3061c2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 7 01:50:05.800262 kubelet[2622]: E0307 01:50:05.791815 2622 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"030e3f0733f96eafe1c3bfced45233073e779ae1bf2ea53759c513d7b3061c2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 7 01:50:05.800262 kubelet[2622]: E0307 01:50:05.792116 2622 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"030e3f0733f96eafe1c3bfced45233073e779ae1bf2ea53759c513d7b3061c2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-6kp77" Mar 7 01:50:05.800262 kubelet[2622]: E0307 01:50:05.794004 2622 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"030e3f0733f96eafe1c3bfced45233073e779ae1bf2ea53759c513d7b3061c2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-6kp77" Mar 7 01:50:05.800262 kubelet[2622]: E0307 01:50:05.794369 2622 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-6kp77_kube-system(920ee015-9421-4414-8db8-245fb0cf77ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-6kp77_kube-system(920ee015-9421-4414-8db8-245fb0cf77ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"030e3f0733f96eafe1c3bfced45233073e779ae1bf2ea53759c513d7b3061c2e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-6kp77" podUID="920ee015-9421-4414-8db8-245fb0cf77ac" Mar 7 01:50:05.963086 kubelet[2622]: E0307 01:50:05.962214 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:06.090439 containerd[1468]: time="2026-03-07T01:50:06.090389362Z" level=info msg="CreateContainer within sandbox \"053b596f82be03873f44d5a94208c3a2af03ba79a10782284cfd8ba00ddfe9fd\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 7 01:50:06.322845 containerd[1468]: 
time="2026-03-07T01:50:06.321768716Z" level=info msg="CreateContainer within sandbox \"053b596f82be03873f44d5a94208c3a2af03ba79a10782284cfd8ba00ddfe9fd\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"62f61cb93189467687d587459e130e1799b5ff5ee60d3e237a633be564efed99\"" Mar 7 01:50:06.347440 containerd[1468]: time="2026-03-07T01:50:06.328036843Z" level=info msg="StartContainer for \"62f61cb93189467687d587459e130e1799b5ff5ee60d3e237a633be564efed99\"" Mar 7 01:50:06.421324 containerd[1468]: time="2026-03-07T01:50:06.419231476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hwvkx,Uid:32b9a46a-da94-4354-9286-87a0a76c1c8d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f7bd21c7e39dda324cb699cfc36fe8747bbdb1abf393fd0c214e789bb88160e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 7 01:50:06.421811 kubelet[2622]: E0307 01:50:06.421767 2622 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f7bd21c7e39dda324cb699cfc36fe8747bbdb1abf393fd0c214e789bb88160e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 7 01:50:06.424593 kubelet[2622]: E0307 01:50:06.424474 2622 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f7bd21c7e39dda324cb699cfc36fe8747bbdb1abf393fd0c214e789bb88160e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-hwvkx" Mar 7 01:50:06.424928 kubelet[2622]: E0307 01:50:06.424779 2622 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7f7bd21c7e39dda324cb699cfc36fe8747bbdb1abf393fd0c214e789bb88160e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7d764666f9-hwvkx" Mar 7 01:50:06.426626 kubelet[2622]: E0307 01:50:06.426508 2622 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-hwvkx_kube-system(32b9a46a-da94-4354-9286-87a0a76c1c8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-hwvkx_kube-system(32b9a46a-da94-4354-9286-87a0a76c1c8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f7bd21c7e39dda324cb699cfc36fe8747bbdb1abf393fd0c214e789bb88160e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7d764666f9-hwvkx" podUID="32b9a46a-da94-4354-9286-87a0a76c1c8d" Mar 7 01:50:06.458526 systemd[1]: run-netns-cni\x2d219e3a89\x2d3969\x2df836\x2d9eea\x2db1e2ced5e25d.mount: Deactivated successfully. Mar 7 01:50:06.458818 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f7bd21c7e39dda324cb699cfc36fe8747bbdb1abf393fd0c214e789bb88160e-shm.mount: Deactivated successfully. Mar 7 01:50:06.508817 systemd[1]: Started cri-containerd-62f61cb93189467687d587459e130e1799b5ff5ee60d3e237a633be564efed99.scope - libcontainer container 62f61cb93189467687d587459e130e1799b5ff5ee60d3e237a633be564efed99. 
Mar 7 01:50:06.617684 containerd[1468]: time="2026-03-07T01:50:06.614629624Z" level=info msg="StartContainer for \"62f61cb93189467687d587459e130e1799b5ff5ee60d3e237a633be564efed99\" returns successfully" Mar 7 01:50:07.024120 kubelet[2622]: E0307 01:50:07.016911 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:07.118603 kubelet[2622]: I0307 01:50:07.117353 2622 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-zssc6" podStartSLOduration=3.555074307 podStartE2EDuration="24.117333497s" podCreationTimestamp="2026-03-07 01:49:43 +0000 UTC" firstStartedPulling="2026-03-07 01:49:45.412489889 +0000 UTC m=+16.382389401" lastFinishedPulling="2026-03-07 01:50:05.974749078 +0000 UTC m=+36.944648591" observedRunningTime="2026-03-07 01:50:07.107784024 +0000 UTC m=+38.077683566" watchObservedRunningTime="2026-03-07 01:50:07.117333497 +0000 UTC m=+38.087233010" Mar 7 01:50:08.052821 kubelet[2622]: E0307 01:50:08.051757 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:08.096134 systemd-networkd[1398]: flannel.1: Link UP Mar 7 01:50:08.096147 systemd-networkd[1398]: flannel.1: Gained carrier Mar 7 01:50:09.657645 systemd-networkd[1398]: flannel.1: Gained IPv6LL Mar 7 01:50:10.255570 update_engine[1459]: I20260307 01:50:10.250690 1459 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 01:50:10.255570 update_engine[1459]: I20260307 01:50:10.250813 1459 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 01:50:10.255570 update_engine[1459]: I20260307 01:50:10.251202 1459 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 01:50:10.269798 
update_engine[1459]: I20260307 01:50:10.265712 1459 omaha_request_params.cc:62] Current group set to lts Mar 7 01:50:10.269798 update_engine[1459]: I20260307 01:50:10.265911 1459 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 01:50:10.269798 update_engine[1459]: I20260307 01:50:10.265933 1459 update_attempter.cc:643] Scheduling an action processor start. Mar 7 01:50:10.269798 update_engine[1459]: I20260307 01:50:10.265961 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 01:50:10.269798 update_engine[1459]: I20260307 01:50:10.266017 1459 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 01:50:10.269798 update_engine[1459]: I20260307 01:50:10.266139 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 01:50:10.269798 update_engine[1459]: I20260307 01:50:10.266156 1459 omaha_request_action.cc:272] Request: Mar 7 01:50:10.269798 update_engine[1459]: Mar 7 01:50:10.269798 update_engine[1459]: Mar 7 01:50:10.269798 update_engine[1459]: Mar 7 01:50:10.269798 update_engine[1459]: Mar 7 01:50:10.269798 update_engine[1459]: Mar 7 01:50:10.269798 update_engine[1459]: Mar 7 01:50:10.269798 update_engine[1459]: Mar 7 01:50:10.269798 update_engine[1459]: Mar 7 01:50:10.269798 update_engine[1459]: I20260307 01:50:10.266170 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:50:10.273039 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 01:50:10.274260 update_engine[1459]: I20260307 01:50:10.271432 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:50:10.274260 update_engine[1459]: I20260307 01:50:10.271888 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 01:50:10.308431 update_engine[1459]: E20260307 01:50:10.307477 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:50:10.308431 update_engine[1459]: I20260307 01:50:10.308096 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 01:50:19.351729 kubelet[2622]: E0307 01:50:19.351406 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:19.358262 containerd[1468]: time="2026-03-07T01:50:19.353919298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hwvkx,Uid:32b9a46a-da94-4354-9286-87a0a76c1c8d,Namespace:kube-system,Attempt:0,}" Mar 7 01:50:19.621981 systemd-networkd[1398]: cni0: Link UP Mar 7 01:50:19.621993 systemd-networkd[1398]: cni0: Gained carrier Mar 7 01:50:19.637474 systemd-networkd[1398]: cni0: Lost carrier Mar 7 01:50:19.709763 systemd-networkd[1398]: veth71bc6c0b: Link UP Mar 7 01:50:19.732944 kernel: cni0: port 1(veth71bc6c0b) entered blocking state Mar 7 01:50:19.733682 kernel: cni0: port 1(veth71bc6c0b) entered disabled state Mar 7 01:50:19.733743 kernel: veth71bc6c0b: entered allmulticast mode Mar 7 01:50:19.745806 kernel: veth71bc6c0b: entered promiscuous mode Mar 7 01:50:19.771770 kernel: cni0: port 1(veth71bc6c0b) entered blocking state Mar 7 01:50:19.771899 kernel: cni0: port 1(veth71bc6c0b) entered forwarding state Mar 7 01:50:19.817712 kernel: cni0: port 1(veth71bc6c0b) entered disabled state Mar 7 01:50:19.877932 kernel: cni0: port 1(veth71bc6c0b) entered blocking state Mar 7 01:50:19.880475 kernel: cni0: port 1(veth71bc6c0b) entered forwarding state Mar 7 01:50:19.883290 systemd-networkd[1398]: veth71bc6c0b: Gained carrier Mar 7 01:50:19.890781 systemd-networkd[1398]: cni0: Gained carrier Mar 7 01:50:19.941189 containerd[1468]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, 
"ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"} Mar 7 01:50:19.941189 containerd[1468]: delegateAdd: netconf sent to delegate plugin: Mar 7 01:50:20.156076 containerd[1468]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-07T01:50:20.146069413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:50:20.156076 containerd[1468]: time="2026-03-07T01:50:20.146249855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:50:20.156076 containerd[1468]: time="2026-03-07T01:50:20.146268430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:50:20.156076 containerd[1468]: time="2026-03-07T01:50:20.146472065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:50:20.259980 update_engine[1459]: I20260307 01:50:20.259796 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:50:20.260516 update_engine[1459]: I20260307 01:50:20.260293 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:50:20.268217 update_engine[1459]: I20260307 01:50:20.268117 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:50:20.293221 update_engine[1459]: E20260307 01:50:20.293077 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:50:20.293221 update_engine[1459]: I20260307 01:50:20.293175 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 7 01:50:20.331008 systemd[1]: Started cri-containerd-c52d2b71aeffc7a14c8342f3c4204f4bc109be055fd8dd6540142d7b50c13e53.scope - libcontainer container c52d2b71aeffc7a14c8342f3c4204f4bc109be055fd8dd6540142d7b50c13e53. Mar 7 01:50:20.494813 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:50:20.544656 kubelet[2622]: E0307 01:50:20.544387 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:20.560691 containerd[1468]: time="2026-03-07T01:50:20.555684737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6kp77,Uid:920ee015-9421-4414-8db8-245fb0cf77ac,Namespace:kube-system,Attempt:0,}" Mar 7 01:50:21.150807 containerd[1468]: time="2026-03-07T01:50:21.148215686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hwvkx,Uid:32b9a46a-da94-4354-9286-87a0a76c1c8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c52d2b71aeffc7a14c8342f3c4204f4bc109be055fd8dd6540142d7b50c13e53\"" Mar 7 01:50:21.150958 kubelet[2622]: E0307 01:50:21.149496 2622 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:21.161007 systemd-networkd[1398]: vethb1141d4f: Link UP Mar 7 01:50:21.178944 kernel: cni0: port 2(vethb1141d4f) entered blocking state Mar 7 01:50:21.179073 kernel: cni0: port 2(vethb1141d4f) entered disabled state Mar 7 01:50:21.187480 kernel: vethb1141d4f: entered allmulticast mode Mar 7 01:50:21.187695 kernel: vethb1141d4f: entered promiscuous mode Mar 7 01:50:21.225364 containerd[1468]: time="2026-03-07T01:50:21.220300577Z" level=info msg="CreateContainer within sandbox \"c52d2b71aeffc7a14c8342f3c4204f4bc109be055fd8dd6540142d7b50c13e53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:50:21.245857 kernel: cni0: port 2(vethb1141d4f) entered blocking state Mar 7 01:50:21.245983 kernel: cni0: port 2(vethb1141d4f) entered forwarding state Mar 7 01:50:21.251986 systemd-networkd[1398]: vethb1141d4f: Gained carrier Mar 7 01:50:21.266204 systemd-networkd[1398]: cni0: Gained IPv6LL Mar 7 01:50:21.340501 containerd[1468]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000012990), "name":"cbr0", "type":"bridge"} Mar 7 01:50:21.340501 containerd[1468]: delegateAdd: netconf sent to delegate plugin: Mar 7 01:50:21.424273 containerd[1468]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-07T01:50:21.410047943Z" level=info msg="CreateContainer within sandbox \"c52d2b71aeffc7a14c8342f3c4204f4bc109be055fd8dd6540142d7b50c13e53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91e750f88357b5f908087c2b8dcac4fb8953c2174953337c022f8aaeabb17dcf\"" Mar 7 01:50:21.424273 containerd[1468]: time="2026-03-07T01:50:21.413828444Z" level=info msg="StartContainer for \"91e750f88357b5f908087c2b8dcac4fb8953c2174953337c022f8aaeabb17dcf\"" Mar 7 01:50:21.563167 containerd[1468]: time="2026-03-07T01:50:21.560160912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:50:21.563167 containerd[1468]: time="2026-03-07T01:50:21.560233600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:50:21.563167 containerd[1468]: time="2026-03-07T01:50:21.560285968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:50:21.563167 containerd[1468]: time="2026-03-07T01:50:21.560412738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:50:21.573102 systemd[1]: Started cri-containerd-91e750f88357b5f908087c2b8dcac4fb8953c2174953337c022f8aaeabb17dcf.scope - libcontainer container 91e750f88357b5f908087c2b8dcac4fb8953c2174953337c022f8aaeabb17dcf. Mar 7 01:50:21.667252 systemd[1]: Started cri-containerd-bdbff745f7acf08e3889c81b1cc8022efb39fa3d659b8e08ed42210047a917d3.scope - libcontainer container bdbff745f7acf08e3889c81b1cc8022efb39fa3d659b8e08ed42210047a917d3. 
Mar 7 01:50:21.736898 systemd-resolved[1400]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:50:21.751146 systemd-networkd[1398]: veth71bc6c0b: Gained IPv6LL Mar 7 01:50:21.754417 containerd[1468]: time="2026-03-07T01:50:21.754321055Z" level=info msg="StartContainer for \"91e750f88357b5f908087c2b8dcac4fb8953c2174953337c022f8aaeabb17dcf\" returns successfully" Mar 7 01:50:22.024850 containerd[1468]: time="2026-03-07T01:50:22.022449174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6kp77,Uid:920ee015-9421-4414-8db8-245fb0cf77ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdbff745f7acf08e3889c81b1cc8022efb39fa3d659b8e08ed42210047a917d3\"" Mar 7 01:50:22.033687 kubelet[2622]: E0307 01:50:22.028696 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:22.095965 containerd[1468]: time="2026-03-07T01:50:22.095903968Z" level=info msg="CreateContainer within sandbox \"bdbff745f7acf08e3889c81b1cc8022efb39fa3d659b8e08ed42210047a917d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:50:22.336001 containerd[1468]: time="2026-03-07T01:50:22.335209847Z" level=info msg="CreateContainer within sandbox \"bdbff745f7acf08e3889c81b1cc8022efb39fa3d659b8e08ed42210047a917d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9bf2051e8283f5734678d891b50f22d9a0c0a27e2b35ec7dac4bf7fae713e3f\"" Mar 7 01:50:22.351040 containerd[1468]: time="2026-03-07T01:50:22.343915376Z" level=info msg="StartContainer for \"f9bf2051e8283f5734678d891b50f22d9a0c0a27e2b35ec7dac4bf7fae713e3f\"" Mar 7 01:50:22.383084 kubelet[2622]: E0307 01:50:22.381485 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 
01:50:22.549511 kubelet[2622]: I0307 01:50:22.544500 2622 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-hwvkx" podStartSLOduration=50.544479949 podStartE2EDuration="50.544479949s" podCreationTimestamp="2026-03-07 01:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:50:22.544454377 +0000 UTC m=+53.514353940" watchObservedRunningTime="2026-03-07 01:50:22.544479949 +0000 UTC m=+53.514379502" Mar 7 01:50:22.619715 systemd-networkd[1398]: vethb1141d4f: Gained IPv6LL Mar 7 01:50:22.633854 systemd[1]: Started cri-containerd-f9bf2051e8283f5734678d891b50f22d9a0c0a27e2b35ec7dac4bf7fae713e3f.scope - libcontainer container f9bf2051e8283f5734678d891b50f22d9a0c0a27e2b35ec7dac4bf7fae713e3f. Mar 7 01:50:23.051316 containerd[1468]: time="2026-03-07T01:50:23.045231030Z" level=info msg="StartContainer for \"f9bf2051e8283f5734678d891b50f22d9a0c0a27e2b35ec7dac4bf7fae713e3f\" returns successfully" Mar 7 01:50:23.463276 kubelet[2622]: E0307 01:50:23.462348 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:23.467407 kubelet[2622]: E0307 01:50:23.465625 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:23.671365 kubelet[2622]: I0307 01:50:23.670485 2622 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-6kp77" podStartSLOduration=51.670468496 podStartE2EDuration="51.670468496s" podCreationTimestamp="2026-03-07 01:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:50:23.551465697 +0000 UTC 
m=+54.521365249" watchObservedRunningTime="2026-03-07 01:50:23.670468496 +0000 UTC m=+54.640368009" Mar 7 01:50:24.476402 kubelet[2622]: E0307 01:50:24.474944 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:24.476402 kubelet[2622]: E0307 01:50:24.476333 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:25.478813 kubelet[2622]: E0307 01:50:25.478324 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:30.256134 update_engine[1459]: I20260307 01:50:30.251029 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:50:30.261356 update_engine[1459]: I20260307 01:50:30.257460 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:50:30.261356 update_engine[1459]: I20260307 01:50:30.258010 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:50:30.302062 update_engine[1459]: E20260307 01:50:30.301819 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:50:30.302062 update_engine[1459]: I20260307 01:50:30.302006 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 7 01:50:40.258578 update_engine[1459]: I20260307 01:50:40.256283 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:50:40.258578 update_engine[1459]: I20260307 01:50:40.256891 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:50:40.258578 update_engine[1459]: I20260307 01:50:40.258301 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 01:50:40.312121 update_engine[1459]: E20260307 01:50:40.309492 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:50:40.312121 update_engine[1459]: I20260307 01:50:40.309685 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 01:50:40.312121 update_engine[1459]: I20260307 01:50:40.309707 1459 omaha_request_action.cc:617] Omaha request response: Mar 7 01:50:40.312121 update_engine[1459]: E20260307 01:50:40.309929 1459 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 7 01:50:40.312121 update_engine[1459]: I20260307 01:50:40.309973 1459 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 7 01:50:40.312121 update_engine[1459]: I20260307 01:50:40.309986 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:50:40.312121 update_engine[1459]: I20260307 01:50:40.309996 1459 update_attempter.cc:306] Processing Done. Mar 7 01:50:40.312121 update_engine[1459]: E20260307 01:50:40.310017 1459 update_attempter.cc:619] Update failed. Mar 7 01:50:40.312121 update_engine[1459]: I20260307 01:50:40.310029 1459 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 7 01:50:40.312121 update_engine[1459]: I20260307 01:50:40.310038 1459 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 7 01:50:40.312121 update_engine[1459]: I20260307 01:50:40.310050 1459 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Mar 7 01:50:40.312121 update_engine[1459]: I20260307 01:50:40.311292 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 01:50:40.312121 update_engine[1459]: I20260307 01:50:40.311338 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 01:50:40.312121 update_engine[1459]: I20260307 01:50:40.311353 1459 omaha_request_action.cc:272] Request: Mar 7 01:50:40.312121 update_engine[1459]: Mar 7 01:50:40.312121 update_engine[1459]: Mar 7 01:50:40.312923 update_engine[1459]: Mar 7 01:50:40.312923 update_engine[1459]: Mar 7 01:50:40.312923 update_engine[1459]: Mar 7 01:50:40.312923 update_engine[1459]: Mar 7 01:50:40.312923 update_engine[1459]: I20260307 01:50:40.311367 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:50:40.312923 update_engine[1459]: I20260307 01:50:40.311776 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:50:40.312923 update_engine[1459]: I20260307 01:50:40.312065 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 01:50:40.318358 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 7 01:50:40.338245 update_engine[1459]: E20260307 01:50:40.338048 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:50:40.343254 update_engine[1459]: I20260307 01:50:40.340752 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 01:50:40.343254 update_engine[1459]: I20260307 01:50:40.340826 1459 omaha_request_action.cc:617] Omaha request response: Mar 7 01:50:40.343254 update_engine[1459]: I20260307 01:50:40.340848 1459 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:50:40.343254 update_engine[1459]: I20260307 01:50:40.340860 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:50:40.343254 update_engine[1459]: I20260307 01:50:40.340874 1459 update_attempter.cc:306] Processing Done. Mar 7 01:50:40.343254 update_engine[1459]: I20260307 01:50:40.340888 1459 update_attempter.cc:310] Error event sent. 
Mar 7 01:50:40.343254 update_engine[1459]: I20260307 01:50:40.340909 1459 update_check_scheduler.cc:74] Next update check in 49m23s Mar 7 01:50:40.348051 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 7 01:50:43.347370 kubelet[2622]: E0307 01:50:43.344488 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:51.363670 kubelet[2622]: E0307 01:50:51.363057 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:50:53.357912 kubelet[2622]: E0307 01:50:53.352785 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:02.361340 kubelet[2622]: E0307 01:51:02.348292 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:30.343750 kubelet[2622]: E0307 01:51:30.341944 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:35.366045 kubelet[2622]: E0307 01:51:35.358364 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:42.348841 kubelet[2622]: E0307 01:51:42.343264 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:46.343176 kubelet[2622]: E0307 01:51:46.340782 2622 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:51:54.360917 kubelet[2622]: E0307 01:51:54.360771 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:52:10.349642 kubelet[2622]: E0307 01:52:10.347807 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:52:10.829199 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:47080.service - OpenSSH per-connection server daemon (10.0.0.1:47080). Mar 7 01:52:11.134988 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 47080 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 01:52:11.174743 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:52:11.419710 systemd-logind[1455]: New session 8 of user core. Mar 7 01:52:11.470332 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:52:11.953946 sshd[3992]: pam_unix(sshd:session): session closed for user core Mar 7 01:52:11.974518 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:47080.service: Deactivated successfully. Mar 7 01:52:11.987738 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:52:11.997273 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:52:12.000942 systemd-logind[1455]: Removed session 8. Mar 7 01:52:16.983231 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:47088.service - OpenSSH per-connection server daemon (10.0.0.1:47088). 
Mar 7 01:52:17.054179 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 47088 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:52:17.056894 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:52:17.080295 systemd-logind[1455]: New session 9 of user core.
Mar 7 01:52:17.095427 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:52:17.378380 sshd[4038]: pam_unix(sshd:session): session closed for user core
Mar 7 01:52:17.391180 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:47088.service: Deactivated successfully.
Mar 7 01:52:17.398096 systemd[1]: session-9.scope: Deactivated successfully.
Mar 7 01:52:17.402507 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit.
Mar 7 01:52:17.406072 systemd-logind[1455]: Removed session 9.
Mar 7 01:52:21.367306 kubelet[2622]: E0307 01:52:21.344327 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:22.411042 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:52670.service - OpenSSH per-connection server daemon (10.0.0.1:52670).
Mar 7 01:52:22.556310 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 52670 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:52:22.566142 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:52:22.638342 systemd-logind[1455]: New session 10 of user core.
Mar 7 01:52:22.668396 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:52:23.453018 sshd[4078]: pam_unix(sshd:session): session closed for user core
Mar 7 01:52:23.460948 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:52670.service: Deactivated successfully.
Mar 7 01:52:23.470807 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:52:23.508333 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:52:23.524056 systemd-logind[1455]: Removed session 10.
Mar 7 01:52:28.526849 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:52684.service - OpenSSH per-connection server daemon (10.0.0.1:52684).
Mar 7 01:52:28.648762 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 52684 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:52:28.654512 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:52:28.701147 systemd-logind[1455]: New session 11 of user core.
Mar 7 01:52:28.707630 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:52:29.125097 sshd[4113]: pam_unix(sshd:session): session closed for user core
Mar 7 01:52:29.148105 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:52684.service: Deactivated successfully.
Mar 7 01:52:29.158023 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:52:29.183438 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:52:29.186262 systemd-logind[1455]: Removed session 11.
Mar 7 01:52:34.184617 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:35340.service - OpenSSH per-connection server daemon (10.0.0.1:35340).
Mar 7 01:52:34.285752 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 35340 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:52:34.299913 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:52:34.332898 systemd-logind[1455]: New session 12 of user core.
Mar 7 01:52:34.367144 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:52:34.869840 sshd[4150]: pam_unix(sshd:session): session closed for user core
Mar 7 01:52:34.884923 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:52:34.896597 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:35340.service: Deactivated successfully.
Mar 7 01:52:34.942174 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:52:34.960665 systemd-logind[1455]: Removed session 12.
Mar 7 01:52:39.947600 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:35344.service - OpenSSH per-connection server daemon (10.0.0.1:35344).
Mar 7 01:52:40.128446 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 35344 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:52:40.146370 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:52:40.195759 systemd-logind[1455]: New session 13 of user core.
Mar 7 01:52:40.233269 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:52:40.846410 sshd[4199]: pam_unix(sshd:session): session closed for user core
Mar 7 01:52:40.869083 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:52:40.882980 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:35344.service: Deactivated successfully.
Mar 7 01:52:40.918627 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:52:40.934196 systemd-logind[1455]: Removed session 13.
Mar 7 01:52:45.909136 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:58128.service - OpenSSH per-connection server daemon (10.0.0.1:58128).
Mar 7 01:52:46.024652 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 58128 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:52:46.031318 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:52:46.055187 systemd-logind[1455]: New session 14 of user core.
Mar 7 01:52:46.069896 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:52:46.349372 kubelet[2622]: E0307 01:52:46.348397 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:46.670878 sshd[4236]: pam_unix(sshd:session): session closed for user core
Mar 7 01:52:46.680801 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:58128.service: Deactivated successfully.
Mar 7 01:52:46.689363 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:52:46.702994 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:52:46.716043 systemd-logind[1455]: Removed session 14.
Mar 7 01:52:47.350791 kubelet[2622]: E0307 01:52:47.346302 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:50.341786 kubelet[2622]: E0307 01:52:50.340235 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:52:51.741162 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:53412.service - OpenSSH per-connection server daemon (10.0.0.1:53412).
Mar 7 01:52:51.848281 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 53412 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:52:51.860727 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:52:51.890841 systemd-logind[1455]: New session 15 of user core.
Mar 7 01:52:51.903744 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:52:52.466305 sshd[4271]: pam_unix(sshd:session): session closed for user core
Mar 7 01:52:52.535865 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:53412.service: Deactivated successfully.
Mar 7 01:52:52.551677 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:52:52.564622 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:52:52.611822 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:53418.service - OpenSSH per-connection server daemon (10.0.0.1:53418).
Mar 7 01:52:52.614778 systemd-logind[1455]: Removed session 15.
Mar 7 01:52:52.696284 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 53418 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:52:52.712247 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:52:52.740290 systemd-logind[1455]: New session 16 of user core.
Mar 7 01:52:52.764228 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:52:53.380112 sshd[4292]: pam_unix(sshd:session): session closed for user core
Mar 7 01:52:53.482986 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:53434.service - OpenSSH per-connection server daemon (10.0.0.1:53434).
Mar 7 01:52:53.485524 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:53418.service: Deactivated successfully.
Mar 7 01:52:53.496185 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:52:53.508354 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:52:53.511914 systemd-logind[1455]: Removed session 16.
Mar 7 01:52:53.672773 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 53434 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:52:53.705007 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:52:53.748368 systemd-logind[1455]: New session 17 of user core.
Mar 7 01:52:53.770079 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:52:54.166755 sshd[4304]: pam_unix(sshd:session): session closed for user core
Mar 7 01:52:54.173223 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:53434.service: Deactivated successfully.
Mar 7 01:52:54.178039 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:52:54.182229 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:52:54.192454 systemd-logind[1455]: Removed session 17.
Mar 7 01:52:59.220501 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:53448.service - OpenSSH per-connection server daemon (10.0.0.1:53448).
Mar 7 01:52:59.348812 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 53448 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:52:59.350593 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:52:59.371420 systemd-logind[1455]: New session 18 of user core.
Mar 7 01:52:59.396043 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:52:59.862024 sshd[4340]: pam_unix(sshd:session): session closed for user core
Mar 7 01:52:59.879127 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:53448.service: Deactivated successfully.
Mar 7 01:52:59.897370 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:52:59.915375 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:52:59.919700 systemd-logind[1455]: Removed session 18.
Mar 7 01:53:04.345920 kubelet[2622]: E0307 01:53:04.345035 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:53:04.927059 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:43326.service - OpenSSH per-connection server daemon (10.0.0.1:43326).
Mar 7 01:53:05.090708 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 43326 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:05.116171 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:05.185369 systemd-logind[1455]: New session 19 of user core.
Mar 7 01:53:05.207491 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:53:05.870257 sshd[4375]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:05.923005 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:43326.service: Deactivated successfully.
Mar 7 01:53:05.924459 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:53:05.946208 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:53:05.958351 systemd-logind[1455]: Removed session 19.
Mar 7 01:53:10.953740 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:50356.service - OpenSSH per-connection server daemon (10.0.0.1:50356).
Mar 7 01:53:11.133912 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 50356 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:11.139419 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:11.155020 systemd-logind[1455]: New session 20 of user core.
Mar 7 01:53:11.178939 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:53:11.360187 kubelet[2622]: E0307 01:53:11.359666 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:53:11.641389 sshd[4423]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:11.668058 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:50356.service: Deactivated successfully.
Mar 7 01:53:11.677585 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:53:11.704116 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:53:11.723024 systemd-logind[1455]: Removed session 20.
Mar 7 01:53:16.697920 systemd[1]: Started sshd@20-10.0.0.112:22-10.0.0.1:50370.service - OpenSSH per-connection server daemon (10.0.0.1:50370).
Mar 7 01:53:16.877979 sshd[4459]: Accepted publickey for core from 10.0.0.1 port 50370 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:16.883812 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:16.925035 systemd-logind[1455]: New session 21 of user core.
Mar 7 01:53:16.946088 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:53:17.607009 sshd[4459]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:17.626745 systemd[1]: sshd@20-10.0.0.112:22-10.0.0.1:50370.service: Deactivated successfully.
Mar 7 01:53:17.639271 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:53:17.646678 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:53:17.660257 systemd-logind[1455]: Removed session 21.
Mar 7 01:53:22.661213 systemd[1]: Started sshd@21-10.0.0.112:22-10.0.0.1:40622.service - OpenSSH per-connection server daemon (10.0.0.1:40622).
Mar 7 01:53:22.755158 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 40622 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:22.761249 sshd[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:22.780699 systemd-logind[1455]: New session 22 of user core.
Mar 7 01:53:22.793261 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:53:23.550713 sshd[4499]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:23.572858 systemd[1]: sshd@21-10.0.0.112:22-10.0.0.1:40622.service: Deactivated successfully.
Mar 7 01:53:23.584294 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:53:23.591086 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:53:23.597469 systemd-logind[1455]: Removed session 22.
Mar 7 01:53:28.656082 systemd[1]: Started sshd@22-10.0.0.112:22-10.0.0.1:40638.service - OpenSSH per-connection server daemon (10.0.0.1:40638).
Mar 7 01:53:29.149941 sshd[4534]: Accepted publickey for core from 10.0.0.1 port 40638 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:29.156146 sshd[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:29.182447 systemd-logind[1455]: New session 23 of user core.
Mar 7 01:53:29.202041 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 01:53:29.793729 sshd[4534]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:29.828327 systemd[1]: sshd@22-10.0.0.112:22-10.0.0.1:40638.service: Deactivated successfully.
Mar 7 01:53:29.836038 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 01:53:29.843651 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit.
Mar 7 01:53:29.883152 systemd[1]: Started sshd@23-10.0.0.112:22-10.0.0.1:40640.service - OpenSSH per-connection server daemon (10.0.0.1:40640).
Mar 7 01:53:29.888378 systemd-logind[1455]: Removed session 23.
Mar 7 01:53:29.956083 sshd[4556]: Accepted publickey for core from 10.0.0.1 port 40640 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:29.961927 sshd[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:29.986807 systemd-logind[1455]: New session 24 of user core.
Mar 7 01:53:30.016336 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 01:53:30.906018 sshd[4556]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:30.952904 systemd[1]: sshd@23-10.0.0.112:22-10.0.0.1:40640.service: Deactivated successfully.
Mar 7 01:53:30.962785 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 01:53:30.987600 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit.
Mar 7 01:53:31.020248 systemd[1]: Started sshd@24-10.0.0.112:22-10.0.0.1:55396.service - OpenSSH per-connection server daemon (10.0.0.1:55396).
Mar 7 01:53:31.024689 systemd-logind[1455]: Removed session 24.
Mar 7 01:53:31.174035 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 55396 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:31.186245 sshd[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:31.251092 systemd-logind[1455]: New session 25 of user core.
Mar 7 01:53:31.269338 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 01:53:33.236878 sshd[4569]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:33.256735 systemd[1]: sshd@24-10.0.0.112:22-10.0.0.1:55396.service: Deactivated successfully.
Mar 7 01:53:33.265252 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 01:53:33.269154 systemd[1]: session-25.scope: Consumed 1.034s CPU time.
Mar 7 01:53:33.273824 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit.
Mar 7 01:53:33.291462 systemd[1]: Started sshd@25-10.0.0.112:22-10.0.0.1:55408.service - OpenSSH per-connection server daemon (10.0.0.1:55408).
Mar 7 01:53:33.294523 systemd-logind[1455]: Removed session 25.
Mar 7 01:53:33.480402 sshd[4611]: Accepted publickey for core from 10.0.0.1 port 55408 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:33.483307 sshd[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:33.514916 systemd-logind[1455]: New session 26 of user core.
Mar 7 01:53:33.533652 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 01:53:34.272455 sshd[4611]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:34.334641 systemd[1]: sshd@25-10.0.0.112:22-10.0.0.1:55408.service: Deactivated successfully.
Mar 7 01:53:34.341801 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 01:53:34.344092 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit.
Mar 7 01:53:34.374951 systemd[1]: Started sshd@26-10.0.0.112:22-10.0.0.1:55412.service - OpenSSH per-connection server daemon (10.0.0.1:55412).
Mar 7 01:53:34.382393 systemd-logind[1455]: Removed session 26.
Mar 7 01:53:34.485481 sshd[4626]: Accepted publickey for core from 10.0.0.1 port 55412 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:34.493803 sshd[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:34.522148 systemd-logind[1455]: New session 27 of user core.
Mar 7 01:53:34.538907 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 7 01:53:34.856603 sshd[4626]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:34.875754 systemd[1]: sshd@26-10.0.0.112:22-10.0.0.1:55412.service: Deactivated successfully.
Mar 7 01:53:34.879140 systemd[1]: session-27.scope: Deactivated successfully.
Mar 7 01:53:34.881964 systemd-logind[1455]: Session 27 logged out. Waiting for processes to exit.
Mar 7 01:53:34.887285 systemd-logind[1455]: Removed session 27.
Mar 7 01:53:37.343711 kubelet[2622]: E0307 01:53:37.343045 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:53:39.954935 systemd[1]: Started sshd@27-10.0.0.112:22-10.0.0.1:55416.service - OpenSSH per-connection server daemon (10.0.0.1:55416).
Mar 7 01:53:40.098165 sshd[4660]: Accepted publickey for core from 10.0.0.1 port 55416 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:40.102365 sshd[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:40.143777 systemd-logind[1455]: New session 28 of user core.
Mar 7 01:53:40.159198 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 7 01:53:40.771740 sshd[4660]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:40.842863 systemd[1]: sshd@27-10.0.0.112:22-10.0.0.1:55416.service: Deactivated successfully.
Mar 7 01:53:40.873347 systemd[1]: session-28.scope: Deactivated successfully.
Mar 7 01:53:40.936017 systemd-logind[1455]: Session 28 logged out. Waiting for processes to exit.
Mar 7 01:53:40.945352 systemd-logind[1455]: Removed session 28.
Mar 7 01:53:45.912444 systemd[1]: Started sshd@28-10.0.0.112:22-10.0.0.1:36372.service - OpenSSH per-connection server daemon (10.0.0.1:36372).
Mar 7 01:53:46.143071 sshd[4696]: Accepted publickey for core from 10.0.0.1 port 36372 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:46.159494 sshd[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:46.189013 systemd-logind[1455]: New session 29 of user core.
Mar 7 01:53:46.219347 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 7 01:53:46.730370 sshd[4696]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:46.758053 systemd[1]: sshd@28-10.0.0.112:22-10.0.0.1:36372.service: Deactivated successfully.
Mar 7 01:53:46.781513 systemd[1]: session-29.scope: Deactivated successfully.
Mar 7 01:53:46.783124 systemd-logind[1455]: Session 29 logged out. Waiting for processes to exit.
Mar 7 01:53:46.793445 systemd-logind[1455]: Removed session 29.
Mar 7 01:53:51.348827 kubelet[2622]: E0307 01:53:51.346090 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:53:51.795742 systemd[1]: Started sshd@29-10.0.0.112:22-10.0.0.1:49072.service - OpenSSH per-connection server daemon (10.0.0.1:49072).
Mar 7 01:53:51.985615 sshd[4731]: Accepted publickey for core from 10.0.0.1 port 49072 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:51.982420 sshd[4731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:52.007600 systemd-logind[1455]: New session 30 of user core.
Mar 7 01:53:52.021209 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 7 01:53:52.456521 sshd[4731]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:52.466153 systemd[1]: sshd@29-10.0.0.112:22-10.0.0.1:49072.service: Deactivated successfully.
Mar 7 01:53:52.475048 systemd[1]: session-30.scope: Deactivated successfully.
Mar 7 01:53:52.482664 systemd-logind[1455]: Session 30 logged out. Waiting for processes to exit.
Mar 7 01:53:52.492296 systemd-logind[1455]: Removed session 30.
Mar 7 01:53:57.497712 systemd[1]: Started sshd@30-10.0.0.112:22-10.0.0.1:49076.service - OpenSSH per-connection server daemon (10.0.0.1:49076).
Mar 7 01:53:57.641260 sshd[4779]: Accepted publickey for core from 10.0.0.1 port 49076 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:53:57.661815 sshd[4779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:53:57.720082 systemd-logind[1455]: New session 31 of user core.
Mar 7 01:53:57.747509 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 7 01:53:58.334613 sshd[4779]: pam_unix(sshd:session): session closed for user core
Mar 7 01:53:58.346184 systemd[1]: sshd@30-10.0.0.112:22-10.0.0.1:49076.service: Deactivated successfully.
Mar 7 01:53:58.349740 systemd[1]: session-31.scope: Deactivated successfully.
Mar 7 01:53:58.356256 systemd-logind[1455]: Session 31 logged out. Waiting for processes to exit.
Mar 7 01:53:58.371524 systemd-logind[1455]: Removed session 31.
Mar 7 01:54:02.340469 kubelet[2622]: E0307 01:54:02.339952 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:54:03.430701 systemd[1]: Started sshd@31-10.0.0.112:22-10.0.0.1:56626.service - OpenSSH per-connection server daemon (10.0.0.1:56626).
Mar 7 01:54:03.548336 sshd[4819]: Accepted publickey for core from 10.0.0.1 port 56626 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:54:03.556477 sshd[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:03.617816 systemd-logind[1455]: New session 32 of user core.
Mar 7 01:54:03.648005 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 7 01:54:04.079275 sshd[4819]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:04.088835 systemd[1]: sshd@31-10.0.0.112:22-10.0.0.1:56626.service: Deactivated successfully.
Mar 7 01:54:04.097899 systemd[1]: session-32.scope: Deactivated successfully.
Mar 7 01:54:04.129432 systemd-logind[1455]: Session 32 logged out. Waiting for processes to exit.
Mar 7 01:54:04.131442 systemd-logind[1455]: Removed session 32.
Mar 7 01:54:09.195746 systemd[1]: Started sshd@32-10.0.0.112:22-10.0.0.1:56634.service - OpenSSH per-connection server daemon (10.0.0.1:56634).
Mar 7 01:54:09.365360 sshd[4853]: Accepted publickey for core from 10.0.0.1 port 56634 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:54:09.373003 sshd[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:09.398515 systemd-logind[1455]: New session 33 of user core.
Mar 7 01:54:09.430331 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 7 01:54:10.018638 sshd[4853]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:10.044403 systemd[1]: sshd@32-10.0.0.112:22-10.0.0.1:56634.service: Deactivated successfully.
Mar 7 01:54:10.066150 systemd[1]: session-33.scope: Deactivated successfully.
Mar 7 01:54:10.073304 systemd-logind[1455]: Session 33 logged out. Waiting for processes to exit.
Mar 7 01:54:10.095937 systemd-logind[1455]: Removed session 33.
Mar 7 01:54:11.348400 kubelet[2622]: E0307 01:54:11.345796 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:54:13.354705 kubelet[2622]: E0307 01:54:13.354096 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:54:15.074113 systemd[1]: Started sshd@33-10.0.0.112:22-10.0.0.1:55806.service - OpenSSH per-connection server daemon (10.0.0.1:55806).
Mar 7 01:54:15.174624 sshd[4891]: Accepted publickey for core from 10.0.0.1 port 55806 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:54:15.182202 sshd[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:15.231018 systemd-logind[1455]: New session 34 of user core.
Mar 7 01:54:15.252279 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 7 01:54:15.360192 kubelet[2622]: E0307 01:54:15.342030 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:54:15.950180 sshd[4891]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:15.975175 systemd-logind[1455]: Session 34 logged out. Waiting for processes to exit.
Mar 7 01:54:15.996075 systemd[1]: sshd@33-10.0.0.112:22-10.0.0.1:55806.service: Deactivated successfully.
Mar 7 01:54:16.017411 systemd[1]: session-34.scope: Deactivated successfully.
Mar 7 01:54:16.042977 systemd-logind[1455]: Removed session 34.
Mar 7 01:54:17.352777 kubelet[2622]: E0307 01:54:17.349347 2622 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:54:21.005953 systemd[1]: Started sshd@34-10.0.0.112:22-10.0.0.1:39906.service - OpenSSH per-connection server daemon (10.0.0.1:39906).
Mar 7 01:54:21.106891 sshd[4926]: Accepted publickey for core from 10.0.0.1 port 39906 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:54:21.122087 sshd[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:21.210424 systemd-logind[1455]: New session 35 of user core.
Mar 7 01:54:21.277243 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 7 01:54:21.882100 sshd[4926]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:21.914312 systemd[1]: sshd@34-10.0.0.112:22-10.0.0.1:39906.service: Deactivated successfully.
Mar 7 01:54:21.926410 systemd[1]: session-35.scope: Deactivated successfully.
Mar 7 01:54:21.932291 systemd-logind[1455]: Session 35 logged out. Waiting for processes to exit.
Mar 7 01:54:21.944453 systemd-logind[1455]: Removed session 35.
Mar 7 01:54:26.963831 systemd[1]: Started sshd@35-10.0.0.112:22-10.0.0.1:39922.service - OpenSSH per-connection server daemon (10.0.0.1:39922).
Mar 7 01:54:27.093275 sshd[4960]: Accepted publickey for core from 10.0.0.1 port 39922 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg
Mar 7 01:54:27.101289 sshd[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:54:27.129791 systemd-logind[1455]: New session 36 of user core.
Mar 7 01:54:27.144982 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 7 01:54:27.568699 sshd[4960]: pam_unix(sshd:session): session closed for user core
Mar 7 01:54:27.591788 systemd[1]: sshd@35-10.0.0.112:22-10.0.0.1:39922.service: Deactivated successfully.
Mar 7 01:54:27.596514 systemd[1]: session-36.scope: Deactivated successfully.
Mar 7 01:54:27.609297 systemd-logind[1455]: Session 36 logged out. Waiting for processes to exit.
Mar 7 01:54:27.616420 systemd-logind[1455]: Removed session 36.