Apr 16 00:59:53.580921 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:45:03 -00 2026
Apr 16 00:59:53.580972 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 00:59:53.580983 kernel: BIOS-provided physical RAM map:
Apr 16 00:59:53.580987 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 16 00:59:53.580992 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 16 00:59:53.580996 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 16 00:59:53.581001 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 16 00:59:53.581006 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 16 00:59:53.581010 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 16 00:59:53.581016 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 16 00:59:53.581021 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 00:59:53.581025 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 16 00:59:53.581048 kernel: NX (Execute Disable) protection: active
Apr 16 00:59:53.581053 kernel: APIC: Static calls initialized
Apr 16 00:59:53.581059 kernel: SMBIOS 2.8 present.
Apr 16 00:59:53.581081 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 16 00:59:53.581086 kernel: Hypervisor detected: KVM
Apr 16 00:59:53.581090 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 00:59:53.581095 kernel: kvm-clock: using sched offset of 9207295260 cycles
Apr 16 00:59:53.581101 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 00:59:53.581106 kernel: tsc: Detected 2793.438 MHz processor
Apr 16 00:59:53.581111 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 00:59:53.581116 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 00:59:53.581121 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 16 00:59:53.581129 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 16 00:59:53.581134 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 00:59:53.581138 kernel: Using GB pages for direct mapping
Apr 16 00:59:53.581143 kernel: ACPI: Early table checksum verification disabled
Apr 16 00:59:53.581148 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 16 00:59:53.581153 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:59:53.581176 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:59:53.581181 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:59:53.581186 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 16 00:59:53.581193 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:59:53.581197 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:59:53.581202 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:59:53.581207 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:59:53.581212 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 16 00:59:53.581217 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 16 00:59:53.581222 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 16 00:59:53.581230 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 16 00:59:53.581236 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 16 00:59:53.581241 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 16 00:59:53.581247 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 16 00:59:53.581251 kernel: No NUMA configuration found
Apr 16 00:59:53.581257 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 16 00:59:53.581262 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 16 00:59:53.581268 kernel: Zone ranges:
Apr 16 00:59:53.581273 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 00:59:53.581279 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 16 00:59:53.581284 kernel: Normal empty
Apr 16 00:59:53.581289 kernel: Movable zone start for each node
Apr 16 00:59:53.581294 kernel: Early memory node ranges
Apr 16 00:59:53.581299 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 16 00:59:53.581304 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 16 00:59:53.581309 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 16 00:59:53.581316 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 00:59:53.581321 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 16 00:59:53.581335 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 16 00:59:53.581341 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 00:59:53.581346 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 00:59:53.581351 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 00:59:53.581356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 00:59:53.581362 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 00:59:53.581367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 00:59:53.581373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 00:59:53.581378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 00:59:53.581383 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 00:59:53.581389 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 00:59:53.581393 kernel: TSC deadline timer available
Apr 16 00:59:53.581399 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 16 00:59:53.581404 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 00:59:53.581409 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 16 00:59:53.581414 kernel: kvm-guest: setup PV sched yield
Apr 16 00:59:53.581428 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 16 00:59:53.581435 kernel: Booting paravirtualized kernel on KVM
Apr 16 00:59:53.581440 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 00:59:53.581446 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 16 00:59:53.581451 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 16 00:59:53.581456 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 16 00:59:53.581461 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 16 00:59:53.581466 kernel: kvm-guest: PV spinlocks enabled
Apr 16 00:59:53.581471 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 00:59:53.581477 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 00:59:53.581485 kernel: random: crng init done
Apr 16 00:59:53.581490 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 00:59:53.581495 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 00:59:53.581500 kernel: Fallback order for Node 0: 0
Apr 16 00:59:53.581505 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 16 00:59:53.581510 kernel: Policy zone: DMA32
Apr 16 00:59:53.581515 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 00:59:53.581521 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved)
Apr 16 00:59:53.581528 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 00:59:53.581533 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 16 00:59:53.581538 kernel: ftrace: allocated 149 pages with 4 groups
Apr 16 00:59:53.581543 kernel: Dynamic Preempt: voluntary
Apr 16 00:59:53.581548 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 00:59:53.581570 kernel: rcu: RCU event tracing is enabled.
Apr 16 00:59:53.581576 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 00:59:53.581581 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 00:59:53.581586 kernel: Rude variant of Tasks RCU enabled.
Apr 16 00:59:53.581593 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 00:59:53.581599 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 00:59:53.581604 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 00:59:53.581633 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 16 00:59:53.581647 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 00:59:53.581653 kernel: Console: colour VGA+ 80x25
Apr 16 00:59:53.581658 kernel: printk: console [ttyS0] enabled
Apr 16 00:59:53.581663 kernel: ACPI: Core revision 20230628
Apr 16 00:59:53.581668 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 00:59:53.581675 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 00:59:53.581680 kernel: x2apic enabled
Apr 16 00:59:53.581686 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 00:59:53.581691 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 16 00:59:53.581696 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 16 00:59:53.581701 kernel: kvm-guest: setup PV IPIs
Apr 16 00:59:53.581706 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 00:59:53.581712 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 00:59:53.581724 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 16 00:59:53.581729 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 00:59:53.581735 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 00:59:53.581742 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 00:59:53.581748 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 00:59:53.581753 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 00:59:53.581759 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 00:59:53.581765 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 16 00:59:53.581772 kernel: RETBleed: Vulnerable
Apr 16 00:59:53.581778 kernel: Speculative Store Bypass: Vulnerable
Apr 16 00:59:53.581783 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 00:59:53.581798 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 16 00:59:53.581803 kernel: active return thunk: its_return_thunk
Apr 16 00:59:53.581809 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 00:59:53.581815 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 00:59:53.581821 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 00:59:53.581826 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 00:59:53.581834 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 00:59:53.581840 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 00:59:53.581845 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 00:59:53.581851 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 00:59:53.581856 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 00:59:53.581862 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 00:59:53.581868 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 00:59:53.581873 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 16 00:59:53.581879 kernel: Freeing SMP alternatives memory: 32K
Apr 16 00:59:53.581886 kernel: pid_max: default: 32768 minimum: 301
Apr 16 00:59:53.581891 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 16 00:59:53.581897 kernel: landlock: Up and running.
Apr 16 00:59:53.581903 kernel: SELinux: Initializing.
Apr 16 00:59:53.581908 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 00:59:53.581914 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 00:59:53.581920 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 16 00:59:53.581934 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 00:59:53.581940 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 00:59:53.581947 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 00:59:53.581953 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 16 00:59:53.581959 kernel: signal: max sigframe size: 3632
Apr 16 00:59:53.581964 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 00:59:53.581970 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 00:59:53.581975 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 00:59:53.581981 kernel: smp: Bringing up secondary CPUs ...
Apr 16 00:59:53.581986 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 00:59:53.581992 kernel: .... node #0, CPUs: #1 #2 #3
Apr 16 00:59:53.581999 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 00:59:53.582005 kernel: smpboot: Max logical packages: 1
Apr 16 00:59:53.582010 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 16 00:59:53.582016 kernel: devtmpfs: initialized
Apr 16 00:59:53.582021 kernel: x86/mm: Memory block size: 128MB
Apr 16 00:59:53.582027 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 00:59:53.582033 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 00:59:53.582038 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 00:59:53.582044 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 00:59:53.582051 kernel: audit: initializing netlink subsys (disabled)
Apr 16 00:59:53.582057 kernel: audit: type=2000 audit(1776301189.153:1): state=initialized audit_enabled=0 res=1
Apr 16 00:59:53.582062 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 00:59:53.582068 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 00:59:53.582073 kernel: cpuidle: using governor menu
Apr 16 00:59:53.582079 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 00:59:53.582084 kernel: dca service started, version 1.12.1
Apr 16 00:59:53.582090 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 16 00:59:53.582096 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 16 00:59:53.582103 kernel: PCI: Using configuration type 1 for base access
Apr 16 00:59:53.582108 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 00:59:53.582114 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 00:59:53.582119 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 00:59:53.582125 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 00:59:53.582130 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 00:59:53.582136 kernel: ACPI: Added _OSI(Module Device)
Apr 16 00:59:53.582141 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 00:59:53.582147 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 00:59:53.582154 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 00:59:53.582174 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 16 00:59:53.582180 kernel: ACPI: Interpreter enabled
Apr 16 00:59:53.582186 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 00:59:53.582191 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 00:59:53.582197 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 00:59:53.582203 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 00:59:53.582208 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 00:59:53.582215 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 00:59:53.582513 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 00:59:53.582589 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 00:59:53.582725 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 00:59:53.582734 kernel: PCI host bridge to bus 0000:00
Apr 16 00:59:53.582826 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 00:59:53.582883 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 00:59:53.582942 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 00:59:53.582996 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 16 00:59:53.583049 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 16 00:59:53.583102 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 16 00:59:53.583176 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 00:59:53.583310 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 16 00:59:53.583403 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 16 00:59:53.583472 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 16 00:59:53.583534 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 16 00:59:53.583594 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 16 00:59:53.583688 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 00:59:53.583794 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 16 00:59:53.583858 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 16 00:59:53.583923 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 16 00:59:53.583983 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 16 00:59:53.584120 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 16 00:59:53.584203 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 16 00:59:53.584271 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 16 00:59:53.584367 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 16 00:59:53.584504 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 16 00:59:53.584641 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 16 00:59:53.584739 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 16 00:59:53.584829 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 16 00:59:53.584985 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 16 00:59:53.585153 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 16 00:59:53.585240 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 00:59:53.585374 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 16 00:59:53.585443 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 16 00:59:53.585503 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 16 00:59:53.585659 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 16 00:59:53.585728 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 16 00:59:53.585735 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 00:59:53.585741 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 00:59:53.585747 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 00:59:53.585756 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 00:59:53.585762 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 00:59:53.585768 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 00:59:53.585773 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 00:59:53.585779 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 00:59:53.585784 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 00:59:53.585805 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 00:59:53.585811 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 00:59:53.585817 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 00:59:53.585825 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 00:59:53.585831 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 00:59:53.585836 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 00:59:53.585842 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 00:59:53.585847 kernel: iommu: Default domain type: Translated
Apr 16 00:59:53.585853 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 00:59:53.585859 kernel: PCI: Using ACPI for IRQ routing
Apr 16 00:59:53.585864 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 00:59:53.585870 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 16 00:59:53.585877 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 16 00:59:53.585943 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 00:59:53.586004 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 00:59:53.586063 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 00:59:53.586071 kernel: vgaarb: loaded
Apr 16 00:59:53.586076 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 00:59:53.586082 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 00:59:53.586088 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 00:59:53.586093 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 00:59:53.586102 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 00:59:53.586107 kernel: pnp: PnP ACPI init
Apr 16 00:59:53.586762 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 16 00:59:53.586776 kernel: pnp: PnP ACPI: found 6 devices
Apr 16 00:59:53.586782 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 00:59:53.586788 kernel: NET: Registered PF_INET protocol family
Apr 16 00:59:53.586794 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 00:59:53.586800 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 00:59:53.586810 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 00:59:53.586816 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 00:59:53.586821 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 00:59:53.586827 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 00:59:53.586833 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 00:59:53.586839 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 00:59:53.586847 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 00:59:53.586853 kernel: NET: Registered PF_XDP protocol family
Apr 16 00:59:53.586919 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 00:59:53.586994 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 00:59:53.587051 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 00:59:53.587106 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 16 00:59:53.587178 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 16 00:59:53.587236 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 16 00:59:53.587243 kernel: PCI: CLS 0 bytes, default 64
Apr 16 00:59:53.587249 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 00:59:53.587255 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 00:59:53.587263 kernel: Initialise system trusted keyrings
Apr 16 00:59:53.587269 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 00:59:53.587275 kernel: Key type asymmetric registered
Apr 16 00:59:53.587280 kernel: Asymmetric key parser 'x509' registered
Apr 16 00:59:53.587286 kernel: hrtimer: interrupt took 3620881 ns
Apr 16 00:59:53.587292 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 16 00:59:53.587298 kernel: io scheduler mq-deadline registered
Apr 16 00:59:53.587303 kernel: io scheduler kyber registered
Apr 16 00:59:53.587309 kernel: io scheduler bfq registered
Apr 16 00:59:53.587317 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 00:59:53.587323 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 00:59:53.587329 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 00:59:53.587334 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 16 00:59:53.587340 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 00:59:53.587346 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 00:59:53.587351 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 00:59:53.587357 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 00:59:53.587363 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 00:59:53.587456 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 16 00:59:53.587469 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 00:59:53.587556 kernel: rtc_cmos 00:04: registered as rtc0
Apr 16 00:59:53.587692 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T00:59:52 UTC (1776301192)
Apr 16 00:59:53.587784 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 16 00:59:53.587793 kernel: intel_pstate: CPU model not supported
Apr 16 00:59:53.587799 kernel: NET: Registered PF_INET6 protocol family
Apr 16 00:59:53.587805 kernel: Segment Routing with IPv6
Apr 16 00:59:53.587815 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 00:59:53.587821 kernel: NET: Registered PF_PACKET protocol family
Apr 16 00:59:53.587827 kernel: Key type dns_resolver registered
Apr 16 00:59:53.587833 kernel: IPI shorthand broadcast: enabled
Apr 16 00:59:53.587838 kernel: sched_clock: Marking stable (3717035682, 662879215)->(4799919876, -420004979)
Apr 16 00:59:53.587844 kernel: registered taskstats version 1
Apr 16 00:59:53.587850 kernel: Loading compiled-in X.509 certificates
Apr 16 00:59:53.587856 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6e6d886174c86dc730e1b14e46a1dab518d9b090'
Apr 16 00:59:53.587862 kernel: Key type .fscrypt registered
Apr 16 00:59:53.587870 kernel: Key type fscrypt-provisioning registered
Apr 16 00:59:53.587875 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 00:59:53.587881 kernel: ima: Allocated hash algorithm: sha1
Apr 16 00:59:53.587886 kernel: ima: No architecture policies found
Apr 16 00:59:53.587892 kernel: clk: Disabling unused clocks
Apr 16 00:59:53.587898 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 16 00:59:53.587903 kernel: Write protecting the kernel read-only data: 36864k
Apr 16 00:59:53.587909 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 16 00:59:53.587914 kernel: Run /init as init process
Apr 16 00:59:53.587922 kernel: with arguments:
Apr 16 00:59:53.587927 kernel: /init
Apr 16 00:59:53.587933 kernel: with environment:
Apr 16 00:59:53.587938 kernel: HOME=/
Apr 16 00:59:53.587943 kernel: TERM=linux
Apr 16 00:59:53.587951 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 00:59:53.587959 systemd[1]: Detected virtualization kvm.
Apr 16 00:59:53.587966 systemd[1]: Detected architecture x86-64.
Apr 16 00:59:53.587973 systemd[1]: Running in initrd.
Apr 16 00:59:53.587979 systemd[1]: No hostname configured, using default hostname.
Apr 16 00:59:53.587985 systemd[1]: Hostname set to <localhost>.
Apr 16 00:59:53.587991 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 00:59:53.587997 systemd[1]: Queued start job for default target initrd.target.
Apr 16 00:59:53.588003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 00:59:53.588009 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 00:59:53.588016 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 00:59:53.588024 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 00:59:53.588030 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 00:59:53.588044 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 00:59:53.588053 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 00:59:53.588061 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 00:59:53.588067 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 00:59:53.588074 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 00:59:53.588080 systemd[1]: Reached target paths.target - Path Units.
Apr 16 00:59:53.588086 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 00:59:53.588092 systemd[1]: Reached target swap.target - Swaps.
Apr 16 00:59:53.588098 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 00:59:53.588104 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 00:59:53.588110 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 00:59:53.588118 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 00:59:53.588124 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 16 00:59:53.588131 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 00:59:53.588137 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 00:59:53.588143 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 00:59:53.588149 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 00:59:53.588178 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 00:59:53.588185 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 00:59:53.588192 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 00:59:53.588200 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 00:59:53.588206 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 00:59:53.588212 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 00:59:53.588218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 00:59:53.588225 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 00:59:53.588246 systemd-journald[192]: Collecting audit messages is disabled.
Apr 16 00:59:53.588265 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 00:59:53.588272 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 00:59:53.588296 systemd-journald[192]: Journal started
Apr 16 00:59:53.588313 systemd-journald[192]: Runtime Journal (/run/log/journal/827e667a6b37450da7a1891c63297d88) is 6.0M, max 48.4M, 42.3M free.
Apr 16 00:59:53.592741 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 00:59:53.603959 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 00:59:53.611366 systemd-modules-load[195]: Inserted module 'overlay'
Apr 16 00:59:53.672955 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 00:59:53.702796 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 00:59:53.706365 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 16 00:59:53.859545 kernel: Bridge firewalling registered
Apr 16 00:59:53.838962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 00:59:53.839448 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 00:59:53.839804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 00:59:53.844297 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 00:59:53.945079 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 00:59:54.026835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 00:59:54.039563 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 00:59:54.184215 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 00:59:54.208533 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 00:59:54.221584 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 00:59:54.234334 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 00:59:54.337114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 00:59:54.344884 dracut-cmdline[226]: dracut-dracut-053
Apr 16 00:59:54.367764 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 00:59:54.562034 systemd-resolved[231]: Positive Trust Anchors:
Apr 16 00:59:54.563698 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 00:59:54.563741 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 00:59:54.585783 systemd-resolved[231]: Defaulting to hostname 'linux'.
Apr 16 00:59:54.598081 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 00:59:54.625251 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 00:59:54.899385 kernel: SCSI subsystem initialized
Apr 16 00:59:54.949602 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 00:59:55.027442 kernel: iscsi: registered transport (tcp)
Apr 16 00:59:55.148914 kernel: iscsi: registered transport (qla4xxx)
Apr 16 00:59:55.152269 kernel: QLogic iSCSI HBA Driver
Apr 16 00:59:55.839340 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 00:59:55.891815 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 00:59:56.792978 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 00:59:56.793437 kernel: device-mapper: uevent: version 1.0.3
Apr 16 00:59:56.814093 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 16 00:59:57.166724 kernel: raid6: avx512x4 gen() 21631 MB/s
Apr 16 00:59:57.187127 kernel: raid6: avx512x2 gen() 19702 MB/s
Apr 16 00:59:57.225070 kernel: raid6: avx512x1 gen() 18314 MB/s
Apr 16 00:59:57.254894 kernel: raid6: avx2x4 gen() 6290 MB/s
Apr 16 00:59:57.278485 kernel: raid6: avx2x2 gen() 11421 MB/s
Apr 16 00:59:57.297357 kernel: raid6: avx2x1 gen() 7829 MB/s
Apr 16 00:59:57.297674 kernel: raid6: using algorithm avx512x4 gen() 21631 MB/s
Apr 16 00:59:57.324776 kernel: raid6: .... xor() 3804 MB/s, rmw enabled
Apr 16 00:59:57.325105 kernel: raid6: using avx512x2 recovery algorithm
Apr 16 00:59:57.427781 kernel: xor: automatically using best checksumming function avx
Apr 16 00:59:57.878564 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 00:59:58.011426 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 00:59:58.046444 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 00:59:58.093137 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Apr 16 00:59:58.101014 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 00:59:58.122918 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 00:59:58.166821 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Apr 16 00:59:59.294783 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 00:59:59.361807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 00:59:59.695459 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 00:59:59.709348 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 00:59:59.779145 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 00:59:59.785788 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 00:59:59.803455 kernel: cryptd: max_cpu_qlen set to 1000
Apr 16 00:59:59.789080 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 00:59:59.789261 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 00:59:59.820162 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 00:59:59.856664 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 16 00:59:59.879200 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 00:59:59.893554 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 16 00:59:59.879303 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 00:59:59.896580 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 00:59:59.899157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 00:59:59.907357 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 00:59:59.917560 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 00:59:59.941276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 00:59:59.953217 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 00:59:59.953272 kernel: GPT:9289727 != 19775487
Apr 16 00:59:59.953358 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 00:59:59.953372 kernel: GPT:9289727 != 19775487
Apr 16 00:59:59.953396 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 00:59:59.953408 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 00:59:59.958967 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 16 00:59:59.959125 kernel: libata version 3.00 loaded.
Apr 16 00:59:59.961653 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 00:59:59.973920 kernel: AES CTR mode by8 optimization enabled
Apr 16 00:59:59.982719 kernel: ahci 0000:00:1f.2: version 3.0
Apr 16 00:59:59.986344 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 16 00:59:59.995695 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 16 01:00:00.000808 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 16 01:00:00.007164 kernel: scsi host0: ahci
Apr 16 01:00:00.011343 kernel: scsi host1: ahci
Apr 16 01:00:00.022206 kernel: scsi host2: ahci
Apr 16 01:00:00.026242 kernel: scsi host3: ahci
Apr 16 01:00:00.030693 kernel: scsi host4: ahci
Apr 16 01:00:00.051926 kernel: scsi host5: ahci
Apr 16 01:00:00.052327 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 16 01:00:00.052344 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 16 01:00:00.052752 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 16 01:00:00.052804 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Apr 16 01:00:00.053281 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 16 01:00:00.053300 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 16 01:00:00.053312 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 16 01:00:00.079777 kernel: BTRFS: device fsid 936fcbd8-a8ab-4e87-b115-d77c7a08e984 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (470)
Apr 16 01:00:00.136744 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 01:00:00.469411 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 16 01:00:00.469449 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 16 01:00:00.470162 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 16 01:00:00.470266 kernel: ata3.00: applying bridge limits
Apr 16 01:00:00.470300 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 16 01:00:00.470311 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 16 01:00:00.470336 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 16 01:00:00.470349 kernel: ata3.00: configured for UDMA/100
Apr 16 01:00:00.470362 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 16 01:00:00.470373 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 01:00:00.497940 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 16 01:00:00.571010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 01:00:00.676170 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 16 01:00:00.680226 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 01:00:00.718457 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 16 01:00:00.744043 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 16 01:00:00.758054 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 16 01:00:00.774957 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 16 01:00:00.886945 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 01:00:00.917957 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 01:00:00.946850 disk-uuid[566]: Primary Header is updated.
Apr 16 01:00:00.946850 disk-uuid[566]: Secondary Entries is updated.
Apr 16 01:00:00.946850 disk-uuid[566]: Secondary Header is updated.
Apr 16 01:00:00.959862 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 01:00:01.080864 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 01:00:01.105424 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 01:00:02.097943 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 01:00:02.124394 disk-uuid[567]: The operation has completed successfully.
Apr 16 01:00:02.409527 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 01:00:02.409746 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 01:00:02.556871 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 01:00:03.046821 sh[590]: Success
Apr 16 01:00:03.673263 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 16 01:00:04.880869 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 01:00:04.929966 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 01:00:05.040138 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 01:00:05.286145 kernel: BTRFS info (device dm-0): first mount of filesystem 936fcbd8-a8ab-4e87-b115-d77c7a08e984
Apr 16 01:00:05.286416 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 16 01:00:05.291287 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 16 01:00:05.291655 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 16 01:00:05.293158 kernel: BTRFS info (device dm-0): using free space tree
Apr 16 01:00:05.425353 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 01:00:05.442276 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 01:00:05.511932 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 01:00:05.587535 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 01:00:05.751127 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:00:05.759707 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 01:00:05.760061 kernel: BTRFS info (device vda6): using free space tree
Apr 16 01:00:05.845846 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 01:00:06.291876 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:00:06.290283 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 16 01:00:06.521333 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 01:00:06.565789 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 01:00:09.201346 ignition[681]: Ignition 2.19.0
Apr 16 01:00:09.241198 ignition[681]: Stage: fetch-offline
Apr 16 01:00:09.244595 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Apr 16 01:00:09.246399 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:00:09.250805 ignition[681]: parsed url from cmdline: ""
Apr 16 01:00:09.250811 ignition[681]: no config URL provided
Apr 16 01:00:09.250821 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 01:00:09.250835 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Apr 16 01:00:09.250995 ignition[681]: op(1): [started] loading QEMU firmware config module
Apr 16 01:00:09.251001 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 16 01:00:09.671603 ignition[681]: op(1): [finished] loading QEMU firmware config module
Apr 16 01:00:09.927883 ignition[681]: parsing config with SHA512: 96e8e377e71cf72253b6af15d45ddbb97ae57309b8ecedb4882fdd628bd1db47fcaa88e77ebd761ec107adebd511d3ffe1c2e6bd4b1978bc9c5b525332625fce
Apr 16 01:00:10.093075 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 01:00:10.159117 unknown[681]: fetched base config from "system"
Apr 16 01:00:10.159132 unknown[681]: fetched user config from "qemu"
Apr 16 01:00:10.185937 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 01:00:10.186692 ignition[681]: fetch-offline: fetch-offline passed
Apr 16 01:00:10.186872 ignition[681]: Ignition finished successfully
Apr 16 01:00:10.225582 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 01:00:11.652808 systemd-networkd[779]: lo: Link UP
Apr 16 01:00:11.656919 systemd-networkd[779]: lo: Gained carrier
Apr 16 01:00:11.858586 systemd-networkd[779]: Enumeration completed
Apr 16 01:00:11.876398 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 01:00:11.927168 systemd[1]: Reached target network.target - Network.
Apr 16 01:00:11.953899 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 16 01:00:11.954175 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 01:00:11.954179 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 01:00:11.980305 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 01:00:11.990724 systemd-networkd[779]: eth0: Link UP
Apr 16 01:00:11.990729 systemd-networkd[779]: eth0: Gained carrier
Apr 16 01:00:11.990744 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 01:00:12.134845 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 01:00:13.298471 ignition[782]: Ignition 2.19.0
Apr 16 01:00:13.299288 ignition[782]: Stage: kargs
Apr 16 01:00:13.299559 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Apr 16 01:00:13.299571 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:00:13.329903 ignition[782]: kargs: kargs passed
Apr 16 01:00:13.378495 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 01:00:13.330787 ignition[782]: Ignition finished successfully
Apr 16 01:00:13.500154 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 01:00:13.728380 systemd-networkd[779]: eth0: Gained IPv6LL
Apr 16 01:00:17.549654 ignition[791]: Ignition 2.19.0
Apr 16 01:00:17.558254 ignition[791]: Stage: disks
Apr 16 01:00:17.575293 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 16 01:00:17.575726 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:00:17.584901 ignition[791]: disks: disks passed
Apr 16 01:00:17.598282 ignition[791]: Ignition finished successfully
Apr 16 01:00:17.670980 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 01:00:17.746159 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 01:00:17.775526 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 01:00:17.855339 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 01:00:17.865734 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 01:00:17.880183 systemd[1]: Reached target basic.target - Basic System.
Apr 16 01:00:18.096559 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 01:00:19.841289 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 16 01:00:19.960391 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 01:00:19.989157 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 01:00:24.274932 kernel: EXT4-fs (vda9): mounted filesystem 9ac74074-8829-477f-a4c4-5563740ec49b r/w with ordered data mode. Quota mode: none.
Apr 16 01:00:24.285532 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 01:00:24.328093 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 01:00:24.412468 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 01:00:24.418753 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 01:00:24.421233 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 01:00:24.423174 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 01:00:24.423270 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 01:00:24.547790 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 01:00:24.557527 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Apr 16 01:00:24.569470 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:00:24.569557 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 01:00:24.569572 kernel: BTRFS info (device vda6): using free space tree
Apr 16 01:00:24.585987 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 01:00:24.679252 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 01:00:24.682709 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 01:00:25.286150 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 01:00:25.410057 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Apr 16 01:00:25.434841 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 01:00:25.472346 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 01:00:41.168039 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 01:00:41.580588 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 01:00:41.659854 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 01:00:41.976524 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:00:41.973325 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 01:00:42.664266 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 01:00:46.134493 ignition[925]: INFO : Ignition 2.19.0
Apr 16 01:00:46.134493 ignition[925]: INFO : Stage: mount
Apr 16 01:00:46.163580 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 01:00:46.163580 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:00:46.344764 ignition[925]: INFO : mount: mount passed
Apr 16 01:00:46.345413 ignition[925]: INFO : Ignition finished successfully
Apr 16 01:00:46.372196 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 01:00:46.813088 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 01:00:49.186564 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 01:00:50.183555 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Apr 16 01:00:50.190368 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 01:00:50.190722 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 01:00:50.195009 kernel: BTRFS info (device vda6): using free space tree Apr 16 01:00:50.258390 kernel: BTRFS info (device vda6): auto enabling async discard Apr 16 01:00:50.383370 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 01:00:52.697513 ignition[956]: INFO : Ignition 2.19.0 Apr 16 01:00:52.697513 ignition[956]: INFO : Stage: files Apr 16 01:00:52.711238 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 01:00:52.711238 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 01:00:52.727851 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Apr 16 01:00:52.750024 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 16 01:00:52.756239 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 16 01:00:52.919195 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 16 01:00:52.926354 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 16 01:00:52.989530 unknown[956]: wrote ssh authorized keys file for user: core Apr 16 01:00:52.995763 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 16 01:00:53.144118 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 01:00:53.154719 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 16 01:00:53.668119 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 16 01:00:56.363532 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 01:00:56.363532 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 01:00:56.387957 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 16 01:00:57.188854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 16 01:01:02.594064 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 01:01:02.594064 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 16 01:01:02.642983 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 01:01:02.648995 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 01:01:02.648995 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 16 01:01:02.648995 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 16 01:01:02.664475 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 01:01:02.664475 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 01:01:02.664475 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 16 01:01:02.664475 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 16 01:01:02.795690 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 01:01:02.866942 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 01:01:02.880230 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 16 01:01:02.884654 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 16 01:01:02.889362 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 16 01:01:02.895121 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 16 01:01:02.909120 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 01:01:02.909120 ignition[956]: INFO : files: files passed Apr 16 01:01:02.909120 ignition[956]: INFO : Ignition finished successfully Apr 16 01:01:02.918846 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 16 01:01:02.935310 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 16 01:01:02.956967 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 16 01:01:02.966834 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 16 01:01:02.967021 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 16 01:01:02.984859 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Apr 16 01:01:03.005997 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 01:01:03.005997 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 16 01:01:03.024104 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 01:01:03.066413 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 01:01:03.082834 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 16 01:01:03.167384 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 16 01:01:03.344640 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 16 01:01:03.346141 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 16 01:01:03.355887 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 16 01:01:03.365176 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 16 01:01:03.370131 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 16 01:01:03.393356 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 16 01:01:03.552260 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 01:01:03.594848 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 16 01:01:03.656600 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 16 01:01:03.660916 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 01:01:03.661322 systemd[1]: Stopped target timers.target - Timer Units. Apr 16 01:01:03.661713 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 16 01:01:03.661988 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 01:01:03.663113 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 16 01:01:03.663482 systemd[1]: Stopped target basic.target - Basic System. Apr 16 01:01:03.671768 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 16 01:01:03.672489 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 01:01:03.680355 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 16 01:01:03.685061 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 16 01:01:03.686264 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 01:01:03.687452 systemd[1]: Stopped target sysinit.target - System Initialization.
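Every op(N) in the files stage above is driven by the Ignition config supplied to the VM. A minimal Butane sketch that could produce operations like those logged; the paths, URLs, and unit names are taken from the log itself, while the SSH key placeholder and overall shape are illustrative rather than the actual config used on this boot:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...   # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
        - name: coreos-metadata.service
          enabled: false

Rendered to Ignition JSON with the butane tool, a config of this shape yields exactly the user/ssh-key, file, link, and unit-preset operations recorded above.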
Apr 16 01:01:03.692129 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 16 01:01:03.692854 systemd[1]: Stopped target swap.target - Swaps. Apr 16 01:01:03.695374 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 16 01:01:03.696326 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 16 01:01:03.770064 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 16 01:01:03.784653 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 01:01:03.792253 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 16 01:01:03.794294 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 01:01:03.802495 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 16 01:01:03.802997 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 16 01:01:03.824426 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 16 01:01:03.826311 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 01:01:03.837509 systemd[1]: Stopped target paths.target - Path Units. Apr 16 01:01:03.846031 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 16 01:01:03.853319 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 01:01:03.854236 systemd[1]: Stopped target slices.target - Slice Units. Apr 16 01:01:03.876401 systemd[1]: Stopped target sockets.target - Socket Units. Apr 16 01:01:03.885603 systemd[1]: iscsid.socket: Deactivated successfully. Apr 16 01:01:03.885803 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 01:01:03.886015 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 16 01:01:03.886077 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 01:01:03.898963 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 16 01:01:03.899374 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 01:01:03.909042 systemd[1]: ignition-files.service: Deactivated successfully. Apr 16 01:01:03.909262 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 16 01:01:03.934874 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 16 01:01:03.940933 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 16 01:01:03.942106 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 16 01:01:03.942279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 01:01:03.949034 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 16 01:01:03.949180 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 01:01:04.080832 ignition[1011]: INFO : Ignition 2.19.0 Apr 16 01:01:04.080832 ignition[1011]: INFO : Stage: umount Apr 16 01:01:04.080832 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 01:01:04.080832 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 01:01:04.076209 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 16 01:01:04.103647 ignition[1011]: INFO : umount: umount passed Apr 16 01:01:04.103647 ignition[1011]: INFO : Ignition finished successfully Apr 16 01:01:04.081184 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Apr 16 01:01:04.081413 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 16 01:01:04.098034 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 16 01:01:04.098204 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 16 01:01:04.113960 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 16 01:01:04.114186 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 16 01:01:04.133797 systemd[1]: Stopped target network.target - Network. Apr 16 01:01:04.139580 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 16 01:01:04.139966 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 16 01:01:04.159797 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 16 01:01:04.161954 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 16 01:01:04.174502 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 16 01:01:04.177526 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 16 01:01:04.192310 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 16 01:01:04.194572 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 16 01:01:04.208261 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 16 01:01:04.208728 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 16 01:01:04.224325 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 16 01:01:04.233423 systemd-networkd[779]: eth0: DHCPv6 lease lost Apr 16 01:01:04.233453 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 16 01:01:04.236957 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 16 01:01:04.241318 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 16 01:01:04.261567 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 16 01:01:04.266365 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 16 01:01:04.361967 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 16 01:01:04.362073 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 16 01:01:04.412255 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 16 01:01:04.427847 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 16 01:01:04.433535 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 01:01:04.446834 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 01:01:04.449801 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 01:01:04.450054 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 16 01:01:04.450131 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 16 01:01:04.467313 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 16 01:01:04.467746 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 01:01:04.486727 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 01:01:04.517879 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 16 01:01:04.518072 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 01:01:04.521583 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Apr 16 01:01:04.526781 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 16 01:01:04.539161 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 16 01:01:04.540647 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 01:01:04.547243 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 16 01:01:04.547564 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 16 01:01:04.557045 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 16 01:01:04.560407 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 16 01:01:04.566756 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 01:01:04.567042 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 01:01:04.671203 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 16 01:01:04.679817 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 16 01:01:04.680130 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 01:01:04.695678 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 16 01:01:04.695981 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 01:01:04.703356 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 16 01:01:04.703726 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 01:01:04.718939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 01:01:04.720319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 01:01:04.749763 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 16 01:01:04.762789 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 16 01:01:04.787596 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 16 01:01:04.787814 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 16 01:01:04.821576 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 16 01:01:04.848993 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 16 01:01:05.070359 systemd[1]: Switching root. Apr 16 01:01:05.299437 systemd-journald[192]: Journal stopped Apr 16 01:01:10.351334 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Apr 16 01:01:10.351382 kernel: SELinux: policy capability network_peer_controls=1 Apr 16 01:01:10.351393 kernel: SELinux: policy capability open_perms=1 Apr 16 01:01:10.351420 kernel: SELinux: policy capability extended_socket_class=1 Apr 16 01:01:10.351428 kernel: SELinux: policy capability always_check_network=0 Apr 16 01:01:10.351436 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 16 01:01:10.351445 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 16 01:01:10.351452 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 16 01:01:10.351460 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 16 01:01:10.351468 kernel: audit: type=1403 audit(1776301265.713:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 16 01:01:10.351479 systemd[1]: Successfully loaded SELinux policy in 96.492ms. Apr 16 01:01:10.351520 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 353.450ms. 
Apr 16 01:01:10.351544 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 16 01:01:10.351553 systemd[1]: Detected virtualization kvm. Apr 16 01:01:10.351565 systemd[1]: Detected architecture x86-64. Apr 16 01:01:10.351574 systemd[1]: Detected first boot. Apr 16 01:01:10.351582 systemd[1]: Initializing machine ID from VM UUID. Apr 16 01:01:10.351590 zram_generator::config[1056]: No configuration found. Apr 16 01:01:10.351600 systemd[1]: Populated /etc with preset unit settings. Apr 16 01:01:10.351640 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 16 01:01:10.351664 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 16 01:01:10.351686 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 16 01:01:10.351707 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 16 01:01:10.351716 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 16 01:01:10.351728 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 16 01:01:10.351736 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 16 01:01:10.351745 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 16 01:01:10.351754 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 16 01:01:10.351762 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 16 01:01:10.354204 systemd[1]: Created slice user.slice - User and Session Slice. Apr 16 01:01:10.354522 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 01:01:10.354533 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 01:01:10.354542 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 16 01:01:10.354551 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 16 01:01:10.354559 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 16 01:01:10.354653 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 01:01:10.354664 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 16 01:01:10.354672 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 01:01:10.354697 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 16 01:01:10.354706 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 16 01:01:10.354714 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 16 01:01:10.354723 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 16 01:01:10.354734 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 01:01:10.354745 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 01:01:10.354754 systemd[1]: Reached target slices.target - Slice Units. Apr 16 01:01:10.354777 systemd[1]: Reached target swap.target - Swaps. 
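The "Populated /etc with preset unit settings" step applies systemd preset files, which is how the enable/disable decisions Ignition recorded during the files stage take effect on first boot. Preset syntax is one directive per line; a sketch matching the choices logged earlier (the file name shown is typical for an Ignition-written preset, not confirmed by this log):

    # /etc/systemd/system-preset/20-ignition.preset
    enable prepare-helm.service
    disable coreos-metadata.service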
Apr 16 01:01:10.354786 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 16 01:01:10.354794 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 16 01:01:10.354802 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 01:01:10.354810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 01:01:10.354817 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 01:01:10.354825 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 16 01:01:10.354833 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 16 01:01:10.354841 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 16 01:01:10.354849 systemd[1]: Mounting media.mount - External Media Directory... Apr 16 01:01:10.354873 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:01:10.354881 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 16 01:01:10.354889 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 16 01:01:10.354898 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 16 01:01:10.354906 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 16 01:01:10.354915 systemd[1]: Reached target machines.target - Containers. Apr 16 01:01:10.354923 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 16 01:01:10.354931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 01:01:10.354955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 01:01:10.354963 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 16 01:01:10.354971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 01:01:10.354991 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 01:01:10.355000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 01:01:10.355008 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 16 01:01:10.355016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 01:01:10.355024 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 16 01:01:10.355045 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 16 01:01:10.355054 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 16 01:01:10.355062 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 16 01:01:10.355070 systemd[1]: Stopped systemd-fsck-usr.service. Apr 16 01:01:10.355078 kernel: fuse: init (API version 7.39) Apr 16 01:01:10.355087 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 01:01:10.355095 kernel: ACPI: bus type drm_connector registered Apr 16 01:01:10.355103 kernel: loop: module loaded Apr 16 01:01:10.355110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 16 01:01:10.355132 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 01:01:10.355164 systemd-journald[1140]: Collecting audit messages is disabled. Apr 16 01:01:10.355183 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 16 01:01:10.355192 systemd-journald[1140]: Journal started Apr 16 01:01:10.355212 systemd-journald[1140]: Runtime Journal (/run/log/journal/827e667a6b37450da7a1891c63297d88) is 6.0M, max 48.4M, 42.3M free. Apr 16 01:01:08.970750 systemd[1]: Queued start job for default target multi-user.target. Apr 16 01:01:09.056030 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 16 01:01:09.066439 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 16 01:01:09.067209 systemd[1]: systemd-journald.service: Consumed 1.703s CPU time. Apr 16 01:01:10.374738 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 01:01:10.388873 systemd[1]: verity-setup.service: Deactivated successfully. Apr 16 01:01:10.391695 systemd[1]: Stopped verity-setup.service. Apr 16 01:01:10.472871 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:01:10.501133 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 01:01:10.511254 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 16 01:01:10.514729 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 16 01:01:10.517029 systemd[1]: Mounted media.mount - External Media Directory. Apr 16 01:01:10.521528 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 16 01:01:10.528531 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 16 01:01:10.533686 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 16 01:01:10.540092 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 16 01:01:10.543201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 01:01:10.551155 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 16 01:01:10.553358 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 16 01:01:10.559469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 01:01:10.559739 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 01:01:10.565886 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 01:01:10.566126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 01:01:10.573187 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 01:01:10.576902 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 01:01:10.584167 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 16 01:01:10.584401 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 16 01:01:10.587324 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 01:01:10.587513 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 01:01:10.592171 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 01:01:10.601005 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
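The runtime journal reported above lives on a tmpfs under /run/log/journal and is capped relative to filesystem size, which is where the "6.0M, max 48.4M" figures come from; once systemd-journal-flush runs (a few entries below), logging continues under /var/log/journal. A journald drop-in sketch for pinning such caps explicitly instead of relying on the computed defaults; the values simply mirror the limits this boot reports, and the drop-in file name is illustrative:

    # /etc/systemd/journald.conf.d/10-size.conf
    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=195M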
Apr 16 01:01:10.609753 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 16 01:01:10.710942 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 01:01:10.735414 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 16 01:01:10.745690 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 16 01:01:10.754962 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 16 01:01:10.755235 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 01:01:10.771158 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 16 01:01:10.850791 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 01:01:10.900539 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 16 01:01:10.910309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 01:01:10.939459 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 16 01:01:10.947024 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 16 01:01:10.950161 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 01:01:10.951944 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 16 01:01:10.954911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 01:01:10.966457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 01:01:10.998904 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 16 01:01:11.014711 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 01:01:11.034659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 01:01:11.039402 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 16 01:01:11.045233 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 16 01:01:11.071286 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 16 01:01:11.076289 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 16 01:01:11.111195 kernel: loop0: detected capacity change from 0 to 140768 Apr 16 01:01:11.175711 systemd-journald[1140]: Time spent on flushing to /var/log/journal/827e667a6b37450da7a1891c63297d88 is 32.208ms for 960 entries. Apr 16 01:01:11.175711 systemd-journald[1140]: System Journal (/var/log/journal/827e667a6b37450da7a1891c63297d88) is 8.0M, max 195.6M, 187.6M free. Apr 16 01:01:11.254155 systemd-journald[1140]: Received client request to flush runtime journal. Apr 16 01:01:11.221319 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 16 01:01:11.241482 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 16 01:01:11.276458 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Apr 16 01:01:11.281002 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 16 01:01:11.299186 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 01:01:11.322686 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 16 01:01:11.377017 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Apr 16 01:01:11.377042 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Apr 16 01:01:11.451689 kernel: loop1: detected capacity change from 0 to 142488 Apr 16 01:01:11.461886 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 16 01:01:11.471831 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 01:01:11.514978 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 16 01:01:11.520471 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 16 01:01:11.525317 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 16 01:01:11.573684 kernel: loop2: detected capacity change from 0 to 219192 Apr 16 01:01:11.664690 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 16 01:01:11.689037 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 01:01:11.746705 kernel: loop3: detected capacity change from 0 to 140768 Apr 16 01:01:11.809116 kernel: loop4: detected capacity change from 0 to 142488 Apr 16 01:01:11.892100 kernel: loop5: detected capacity change from 0 to 219192 Apr 16 01:01:11.975307 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Apr 16 01:01:11.975329 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Apr 16 01:01:12.001850 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 16 01:01:12.009591 (sd-merge)[1196]: Merged extensions into '/usr'. Apr 16 01:01:12.040347 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 01:01:12.066398 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Apr 16 01:01:12.066460 systemd[1]: Reloading... Apr 16 01:01:12.514672 zram_generator::config[1223]: No configuration found. Apr 16 01:01:12.614657 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 16 01:01:13.522552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:01:14.126395 systemd[1]: Reloading finished in 2059 ms. Apr 16 01:01:14.952704 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 01:01:14.995738 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 01:01:15.136164 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 16 01:01:15.222322 systemd[1]: Starting ensure-sysext.service... Apr 16 01:01:15.226856 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 01:01:15.234232 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 01:01:15.240030 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... 
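The (sd-merge) lines above record systemd-sysext overlaying extension images onto /usr: the kubernetes.raw link written into /etc/extensions during the Ignition files stage is what makes the 'kubernetes' extension appear here alongside the baked-in containerd and docker images. Both of the following are standard systemd-sysext verbs for inspecting or redoing the merge on a running system:

    systemd-sysext status    # show which hierarchies are extended and by which images
    systemd-sysext refresh   # unmerge and re-merge after images are added or removed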
Apr 16 01:01:15.240041 systemd[1]: Reloading... Apr 16 01:01:15.267545 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 01:01:15.267833 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 01:01:15.268367 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 01:01:15.272795 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Apr 16 01:01:15.272865 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Apr 16 01:01:15.282942 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 01:01:15.282959 systemd-tmpfiles[1263]: Skipping /boot Apr 16 01:01:15.296736 systemd-udevd[1264]: Using default interface naming scheme 'v255'. Apr 16 01:01:15.298271 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 01:01:15.298280 systemd-tmpfiles[1263]: Skipping /boot Apr 16 01:01:15.401698 zram_generator::config[1290]: No configuration found. Apr 16 01:01:15.571559 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1339) Apr 16 01:01:15.596199 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:01:15.647738 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 16 01:01:15.656429 kernel: ACPI: button: Power Button [PWRF] Apr 16 01:01:15.801934 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 16 01:01:15.814036 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 16 01:01:15.816450 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 16 01:01:15.860736 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 16 01:01:15.996149 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 16 01:01:16.056088 systemd[1]: Reloading finished in 815 ms. Apr 16 01:01:16.166059 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 01:01:16.191797 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 01:01:17.262403 kernel: mousedev: PS/2 mouse device common for all mice Apr 16 01:01:17.787994 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 01:01:17.816264 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 16 01:01:17.856320 systemd[1]: Finished ensure-sysext.service. Apr 16 01:01:17.865135 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:01:17.883476 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 16 01:01:17.889124 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 01:01:17.891474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 01:01:17.892713 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 16 01:01:17.902818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 16 01:01:17.910787 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 01:01:17.919911 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 01:01:17.925232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 01:01:17.929165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 01:01:17.934303 lvm[1361]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 16 01:01:17.934194 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 16 01:01:17.944278 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 01:01:17.955491 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 01:01:17.966042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 01:01:17.984223 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 16 01:01:18.002147 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 01:01:18.009805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 01:01:18.015256 augenrules[1388]: No rules Apr 16 01:01:18.018598 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:01:18.026875 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 16 01:01:18.039338 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 16 01:01:18.047038 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 01:01:18.047215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 01:01:18.051789 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 01:01:18.052979 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 01:01:18.064427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 01:01:18.064837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 01:01:18.074801 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 01:01:18.082060 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 01:01:18.085332 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 01:01:18.092988 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 16 01:01:18.093438 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 01:01:18.117542 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 01:01:18.131095 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 16 01:01:18.135310 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 01:01:18.137764 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 01:01:18.163774 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Apr 16 01:01:18.165931 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 16 01:01:18.166024 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 01:01:18.166711 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 01:01:18.187722 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 16 01:01:18.212485 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 01:01:18.241672 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 16 01:01:18.677578 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 16 01:01:18.691500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 01:01:18.963296 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 16 01:01:18.966228 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 01:01:18.972671 systemd-resolved[1379]: Positive Trust Anchors: Apr 16 01:01:18.972981 systemd-resolved[1379]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 01:01:18.973036 systemd-resolved[1379]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 01:01:18.974993 systemd-networkd[1376]: lo: Link UP Apr 16 01:01:18.975023 systemd-networkd[1376]: lo: Gained carrier Apr 16 01:01:18.976138 systemd-networkd[1376]: Enumeration completed Apr 16 01:01:18.976332 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 01:01:18.976977 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 01:01:18.978965 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 01:01:18.981493 systemd-resolved[1379]: Defaulting to hostname 'linux'. Apr 16 01:01:18.982089 systemd-networkd[1376]: eth0: Link UP Apr 16 01:01:18.982094 systemd-networkd[1376]: eth0: Gained carrier Apr 16 01:01:18.982117 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 01:01:18.997852 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 16 01:01:19.001245 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 01:01:19.001813 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 01:01:19.002681 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Apr 16 01:01:20.068981 systemd-resolved[1379]: Clock change detected. Flushing caches. Apr 16 01:01:20.071996 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
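eth0 matched the catch-all network unit shipped with the OS, which is why networkd notes the match is "based on potentially unpredictable interface name". An abridged sketch of what such a unit looks like; the stock /usr/lib/systemd/network/zz-default.network carries more settings than the two shown here:

    [Match]
    Name=*

    [Network]
    DHCP=yes

With a unit of this shape in place, the DHCPv4 lease (10.0.0.49/16 via 10.0.0.1) and the hand-off to systemd-timesyncd seen in the surrounding entries follow automatically.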
Apr 16 01:01:20.073584 systemd-timesyncd[1382]: Initial clock synchronization to Thu 2026-04-16 01:01:20.068920 UTC. Apr 16 01:01:20.073827 systemd[1]: Reached target network.target - Network. Apr 16 01:01:20.080488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 01:01:20.083909 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 01:01:20.089361 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 16 01:01:20.089503 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 01:01:20.090172 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 16 01:01:20.090509 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 01:01:20.090762 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 01:01:20.091637 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 01:01:20.091670 systemd[1]: Reached target paths.target - Path Units. Apr 16 01:01:20.091931 systemd[1]: Reached target timers.target - Timer Units. Apr 16 01:01:20.097755 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 01:01:20.103251 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 01:01:20.124575 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 01:01:20.129065 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 01:01:20.142945 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 01:01:20.148606 systemd[1]: Reached target basic.target - Basic System. Apr 16 01:01:20.154722 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 01:01:20.154917 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 01:01:20.177627 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 01:01:20.201795 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 01:01:20.239144 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 01:01:20.306677 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 01:01:20.315465 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 01:01:20.333537 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 01:01:20.340839 jq[1427]: false Apr 16 01:01:20.362706 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 16 01:01:20.377858 dbus-daemon[1426]: [system] SELinux support is enabled Apr 16 01:01:20.388668 extend-filesystems[1428]: Found loop3 Apr 16 01:01:20.388668 extend-filesystems[1428]: Found loop4 Apr 16 01:01:20.388668 extend-filesystems[1428]: Found loop5 Apr 16 01:01:20.388668 extend-filesystems[1428]: Found sr0 Apr 16 01:01:20.388668 extend-filesystems[1428]: Found vda Apr 16 01:01:20.388668 extend-filesystems[1428]: Found vda1 Apr 16 01:01:20.388668 extend-filesystems[1428]: Found vda2 Apr 16 01:01:20.388668 extend-filesystems[1428]: Found vda3 Apr 16 01:01:20.388668 extend-filesystems[1428]: Found usr Apr 16 01:01:20.388668 extend-filesystems[1428]: Found vda4 Apr 16 01:01:20.388668 extend-filesystems[1428]: Found vda6 Apr 16 01:01:20.388668 extend-filesystems[1428]: Found vda7 Apr 16 01:01:20.388668 extend-filesystems[1428]: Found vda9 Apr 16 01:01:20.388668 extend-filesystems[1428]: Checking size of /dev/vda9 Apr 16 01:01:20.450289 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 16 01:01:20.450321 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1341) Apr 16 01:01:20.388497 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 01:01:20.450507 extend-filesystems[1428]: Resized partition /dev/vda9 Apr 16 01:01:20.399334 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 01:01:20.450861 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Apr 16 01:01:20.424259 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 01:01:20.429560 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 01:01:20.430327 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 01:01:20.433041 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 01:01:20.451936 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 01:01:20.452583 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 16 01:01:20.461522 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 01:01:20.461805 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 01:01:20.464005 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 01:01:20.470530 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 01:01:20.492651 jq[1447]: true Apr 16 01:01:20.493370 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 01:01:20.494885 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 16 01:01:20.531333 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 16 01:01:20.660444 update_engine[1445]: I20260416 01:01:20.642762 1445 main.cc:92] Flatcar Update Engine starting Apr 16 01:01:20.660444 update_engine[1445]: I20260416 01:01:20.647877 1445 update_check_scheduler.cc:74] Next update check in 5m32s Apr 16 01:01:20.666532 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Apr 16 01:01:20.670977 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 16 01:01:20.670977 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 16 01:01:20.670977 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 16 01:01:20.695314 jq[1453]: true Apr 16 01:01:20.670732 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 01:01:20.695625 extend-filesystems[1428]: Resized filesystem in /dev/vda9 Apr 16 01:01:20.680225 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Apr 16 01:01:20.680243 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 16 01:01:20.684494 systemd-logind[1443]: New seat seat0. Apr 16 01:01:20.705979 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 01:01:20.706007 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 01:01:20.785234 bash[1477]: Updated "/home/core/.ssh/authorized_keys" Apr 16 01:01:20.790416 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 01:01:20.801525 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 16 01:01:20.809936 tar[1451]: linux-amd64/LICENSE Apr 16 01:01:20.815168 tar[1451]: linux-amd64/helm Apr 16 01:01:20.846862 systemd[1]: Started update-engine.service - Update Engine. Apr 16 01:01:20.945708 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 01:01:20.968073 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 16 01:01:20.968535 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 01:01:20.968627 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 01:01:20.976560 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 01:01:20.976737 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 01:01:21.050202 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 01:01:21.288963 systemd-networkd[1376]: eth0: Gained IPv6LL Apr 16 01:01:21.327621 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 01:01:21.344987 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 01:01:21.422250 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 01:01:21.423312 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 16 01:01:21.500329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:01:21.524174 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
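The extend-filesystems sequence above is an online ext4 grow: the root partition had been enlarged earlier in boot, and resize2fs then expanded the mounted filesystem from 553472 to 1864699 4k blocks, which is why it reports "on-line resizing required" rather than demanding an unmount. The equivalent manual step, with the device name taken from the log:

    resize2fs /dev/vda9   # grow the mounted ext4 filesystem to fill its partition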
Apr 16 01:01:21.720947 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 01:01:21.776242 containerd[1456]: time="2026-04-16T01:01:21.771856551Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 16 01:01:21.913661 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 16 01:01:21.913914 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 16 01:01:21.936739 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 01:01:22.036831 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 01:01:22.046475 containerd[1456]: time="2026-04-16T01:01:22.046233013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 16 01:01:22.052287 containerd[1456]: time="2026-04-16T01:01:22.052205389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:01:22.052287 containerd[1456]: time="2026-04-16T01:01:22.052276108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 16 01:01:22.052287 containerd[1456]: time="2026-04-16T01:01:22.052297641Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 16 01:01:22.052745 containerd[1456]: time="2026-04-16T01:01:22.052704822Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 16 01:01:22.052770 containerd[1456]: time="2026-04-16T01:01:22.052750813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 16 01:01:22.052839 containerd[1456]: time="2026-04-16T01:01:22.052811810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:01:22.052856 containerd[1456]: time="2026-04-16T01:01:22.052842019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 16 01:01:22.053428 containerd[1456]: time="2026-04-16T01:01:22.053400195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:01:22.053516 containerd[1456]: time="2026-04-16T01:01:22.053502231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 16 01:01:22.053588 containerd[1456]: time="2026-04-16T01:01:22.053575961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:01:22.053633 containerd[1456]: time="2026-04-16T01:01:22.053624319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 16 01:01:22.053769 containerd[1456]: time="2026-04-16T01:01:22.053756704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 16 01:01:22.054062 containerd[1456]: time="2026-04-16T01:01:22.054003599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 16 01:01:22.054317 containerd[1456]: time="2026-04-16T01:01:22.054298744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:01:22.054375 containerd[1456]: time="2026-04-16T01:01:22.054364017Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 16 01:01:22.054523 containerd[1456]: time="2026-04-16T01:01:22.054506104Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 16 01:01:22.054629 containerd[1456]: time="2026-04-16T01:01:22.054615309Z" level=info msg="metadata content store policy set" policy=shared Apr 16 01:01:22.075501 containerd[1456]: time="2026-04-16T01:01:22.071891979Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 16 01:01:22.075501 containerd[1456]: time="2026-04-16T01:01:22.075216952Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 16 01:01:22.075501 containerd[1456]: time="2026-04-16T01:01:22.075495786Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 16 01:01:22.075501 containerd[1456]: time="2026-04-16T01:01:22.075518054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 16 01:01:22.075501 containerd[1456]: time="2026-04-16T01:01:22.075545500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 16 01:01:22.084606 containerd[1456]: time="2026-04-16T01:01:22.075917979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 16 01:01:22.084606 containerd[1456]: time="2026-04-16T01:01:22.084173248Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 16 01:01:22.084725 containerd[1456]: time="2026-04-16T01:01:22.084674582Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 16 01:01:22.084725 containerd[1456]: time="2026-04-16T01:01:22.084702379Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 16 01:01:22.084725 containerd[1456]: time="2026-04-16T01:01:22.084717434Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 16 01:01:22.084794 containerd[1456]: time="2026-04-16T01:01:22.084732100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 16 01:01:22.084794 containerd[1456]: time="2026-04-16T01:01:22.084746791Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 16 01:01:22.084794 containerd[1456]: time="2026-04-16T01:01:22.084761146Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 16 01:01:22.084794 containerd[1456]: time="2026-04-16T01:01:22.084776930Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 16 01:01:22.084879 containerd[1456]: time="2026-04-16T01:01:22.084793657Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 16 01:01:22.084879 containerd[1456]: time="2026-04-16T01:01:22.084807656Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 16 01:01:22.084879 containerd[1456]: time="2026-04-16T01:01:22.084823501Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 16 01:01:22.084879 containerd[1456]: time="2026-04-16T01:01:22.084841499Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 16 01:01:22.084879 containerd[1456]: time="2026-04-16T01:01:22.084867321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.084977 containerd[1456]: time="2026-04-16T01:01:22.084882789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.084977 containerd[1456]: time="2026-04-16T01:01:22.084900136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.084977 containerd[1456]: time="2026-04-16T01:01:22.084916573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.084977 containerd[1456]: time="2026-04-16T01:01:22.084930125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.084977 containerd[1456]: time="2026-04-16T01:01:22.084945059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.084977 containerd[1456]: time="2026-04-16T01:01:22.084961729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.085180 containerd[1456]: time="2026-04-16T01:01:22.084977300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.085180 containerd[1456]: time="2026-04-16T01:01:22.084993481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.085180 containerd[1456]: time="2026-04-16T01:01:22.085011340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.085180 containerd[1456]: time="2026-04-16T01:01:22.085072022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.085180 containerd[1456]: time="2026-04-16T01:01:22.085084805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.085180 containerd[1456]: time="2026-04-16T01:01:22.085163288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.085309 containerd[1456]: time="2026-04-16T01:01:22.085183414Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..."
type=io.containerd.transfer.v1 Apr 16 01:01:22.085309 containerd[1456]: time="2026-04-16T01:01:22.085209599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.085309 containerd[1456]: time="2026-04-16T01:01:22.085245365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.085309 containerd[1456]: time="2026-04-16T01:01:22.085260377Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 16 01:01:22.085391 containerd[1456]: time="2026-04-16T01:01:22.085317328Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 16 01:01:22.085391 containerd[1456]: time="2026-04-16T01:01:22.085337432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 16 01:01:22.085391 containerd[1456]: time="2026-04-16T01:01:22.085350384Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 16 01:01:22.085391 containerd[1456]: time="2026-04-16T01:01:22.085364502Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 16 01:01:22.085391 containerd[1456]: time="2026-04-16T01:01:22.085375100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 16 01:01:22.085391 containerd[1456]: time="2026-04-16T01:01:22.085388500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 16 01:01:22.085514 containerd[1456]: time="2026-04-16T01:01:22.085401150Z" level=info msg="NRI interface is disabled by configuration." Apr 16 01:01:22.085514 containerd[1456]: time="2026-04-16T01:01:22.085412307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 16 01:01:22.085892 containerd[1456]: time="2026-04-16T01:01:22.085758012Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 16 01:01:22.085892 containerd[1456]: time="2026-04-16T01:01:22.085852568Z" level=info msg="Connect containerd service" Apr 16 01:01:22.086160 containerd[1456]: time="2026-04-16T01:01:22.085901317Z" level=info msg="using legacy CRI server" Apr 16 01:01:22.086160 containerd[1456]: time="2026-04-16T01:01:22.085913580Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 01:01:22.086215 containerd[1456]: time="2026-04-16T01:01:22.086156899Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 16 01:01:22.120929 containerd[1456]: time="2026-04-16T01:01:22.091493266Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 01:01:22.120929 
containerd[1456]: time="2026-04-16T01:01:22.092817109Z" level=info msg="Start subscribing containerd event" Apr 16 01:01:22.120929 containerd[1456]: time="2026-04-16T01:01:22.100905656Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 01:01:22.120929 containerd[1456]: time="2026-04-16T01:01:22.101181212Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 01:01:22.120929 containerd[1456]: time="2026-04-16T01:01:22.107657217Z" level=info msg="Start recovering state" Apr 16 01:01:22.120929 containerd[1456]: time="2026-04-16T01:01:22.120591958Z" level=info msg="Start event monitor" Apr 16 01:01:22.120929 containerd[1456]: time="2026-04-16T01:01:22.120719538Z" level=info msg="Start snapshots syncer" Apr 16 01:01:22.120929 containerd[1456]: time="2026-04-16T01:01:22.120735974Z" level=info msg="Start cni network conf syncer for default" Apr 16 01:01:22.120929 containerd[1456]: time="2026-04-16T01:01:22.120743594Z" level=info msg="Start streaming server" Apr 16 01:01:22.120929 containerd[1456]: time="2026-04-16T01:01:22.120917417Z" level=info msg="containerd successfully booted in 0.350426s" Apr 16 01:01:22.146932 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 01:01:22.227748 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 01:01:22.287059 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 16 01:01:22.311919 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:45992.service - OpenSSH per-connection server daemon (10.0.0.1:45992). Apr 16 01:01:22.344307 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 01:01:22.344540 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 01:01:22.389215 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 01:01:22.603063 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 01:01:22.641969 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 01:01:22.692593 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 16 01:01:22.696907 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 01:01:22.716319 sshd[1523]: Accepted publickey for core from 10.0.0.1 port 45992 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:01:22.740906 sshd[1523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:01:22.930814 systemd-logind[1443]: New session 1 of user core. Apr 16 01:01:22.956442 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 01:01:23.020749 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 01:01:23.384801 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 01:01:23.438464 tar[1451]: linux-amd64/README.md Apr 16 01:01:23.605509 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 01:01:23.612090 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 01:01:23.683141 (systemd)[1536]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 01:01:24.498877 systemd[1536]: Queued start job for default target default.target. Apr 16 01:01:24.509204 systemd[1536]: Created slice app.slice - User Application Slice. Apr 16 01:01:24.509257 systemd[1536]: Reached target paths.target - Paths. Apr 16 01:01:24.509275 systemd[1536]: Reached target timers.target - Timers. 
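The snapshotter skips during the containerd startup above (aufs module missing, /var/lib/containerd on ext4 rather than btrfs or zfs, devmapper unconfigured) come from per-plugin capability probes at load time. A minimal sketch of the filesystem-type probe, assuming a Linux/amd64 host and the golang.org/x/sys/unix package; this illustrates the idea and is not containerd's actual plugin code:

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    // btrfs filesystem magic number as reported by statfs(2).
    const btrfsSuperMagic = 0x9123683e

    // isBtrfs reports whether path sits on btrfs; an equivalent check is why
    // the btrfs snapshotter was skipped above for the ext4-backed
    // /var/lib/containerd.
    func isBtrfs(path string) (bool, error) {
        var st unix.Statfs_t
        if err := unix.Statfs(path, &st); err != nil {
            return false, err
        }
        return st.Type == btrfsSuperMagic, nil
    }

    func main() {
        fmt.Println(isBtrfs("/var/lib/containerd"))
    }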
Apr 16 01:01:24.526017 systemd[1536]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 01:01:24.614678 systemd[1536]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 01:01:24.620880 systemd[1536]: Reached target sockets.target - Sockets. Apr 16 01:01:24.624944 systemd[1536]: Reached target basic.target - Basic System. Apr 16 01:01:24.625138 systemd[1536]: Reached target default.target - Main User Target. Apr 16 01:01:24.625223 systemd[1536]: Startup finished in 912ms. Apr 16 01:01:24.641932 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 01:01:24.784497 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 01:01:24.899940 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:46008.service - OpenSSH per-connection server daemon (10.0.0.1:46008). Apr 16 01:01:25.032486 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 46008 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:01:25.038073 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:01:25.162515 systemd-logind[1443]: New session 2 of user core. Apr 16 01:01:25.185426 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 01:01:25.212888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:01:25.231582 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 01:01:25.239911 systemd[1]: Startup finished in 4.041s (kernel) + 1min 12.754s (initrd) + 18.550s (userspace) = 1min 35.346s. Apr 16 01:01:25.283407 sshd[1548]: pam_unix(sshd:session): session closed for user core Apr 16 01:01:25.295725 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:01:25.320175 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:46008.service: Deactivated successfully. Apr 16 01:01:25.331868 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 01:01:25.333509 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Apr 16 01:01:25.423501 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:36252.service - OpenSSH per-connection server daemon (10.0.0.1:36252). Apr 16 01:01:25.429558 systemd-logind[1443]: Removed session 2. Apr 16 01:01:25.601084 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 36252 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:01:25.611516 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:01:25.713010 systemd-logind[1443]: New session 3 of user core. Apr 16 01:01:25.781896 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 01:01:25.955228 sshd[1564]: pam_unix(sshd:session): session closed for user core Apr 16 01:01:25.977528 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:36252.service: Deactivated successfully. Apr 16 01:01:25.979138 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 01:01:25.980534 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Apr 16 01:01:26.003567 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:36264.service - OpenSSH per-connection server daemon (10.0.0.1:36264). Apr 16 01:01:26.004662 systemd-logind[1443]: Removed session 3. 
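The startup summary above sums to 95.345 s (4.041 + 72.754 + 18.550), one millisecond short of the printed 1min 35.346s; the gap is presumably rounding, since systemd tracks these timestamps in microseconds and rounds each component independently for display. The arithmetic, reproduced as a Go sketch:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        kernel := 4041 * time.Millisecond
        initrd := 1*time.Minute + 12754*time.Millisecond
        userspace := 18550 * time.Millisecond
        fmt.Println(kernel + initrd + userspace) // 1m35.345s vs. 1m35.346s in the log
    }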
Apr 16 01:01:26.356694 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 36264 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:01:26.362666 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:01:26.396566 systemd-logind[1443]: New session 4 of user core. Apr 16 01:01:26.426768 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 01:01:26.619913 sshd[1577]: pam_unix(sshd:session): session closed for user core Apr 16 01:01:26.639020 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:36264.service: Deactivated successfully. Apr 16 01:01:26.657740 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 01:01:26.762329 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Apr 16 01:01:26.811479 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:36268.service - OpenSSH per-connection server daemon (10.0.0.1:36268). Apr 16 01:01:26.832769 systemd-logind[1443]: Removed session 4. Apr 16 01:01:27.030668 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 36268 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:01:27.032303 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:01:27.167198 systemd-logind[1443]: New session 5 of user core. Apr 16 01:01:27.192525 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 16 01:01:27.335986 kubelet[1556]: E0416 01:01:27.335559 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:01:27.343971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:01:27.347008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:01:27.360725 systemd[1]: kubelet.service: Consumed 2.282s CPU time. Apr 16 01:01:27.465886 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 01:01:27.466208 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 01:01:29.834788 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 01:01:30.044409 (dockerd)[1610]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 01:01:31.706514 dockerd[1610]: time="2026-04-16T01:01:31.706051438Z" level=info msg="Starting up" Apr 16 01:01:32.174196 dockerd[1610]: time="2026-04-16T01:01:32.168949088Z" level=info msg="Loading containers: start." Apr 16 01:01:32.992621 kernel: Initializing XFRM netlink socket Apr 16 01:01:33.268337 systemd-networkd[1376]: docker0: Link UP Apr 16 01:01:33.430990 dockerd[1610]: time="2026-04-16T01:01:33.430146575Z" level=info msg="Loading containers: done." 
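The kubelet exit above, repeated on every restart through the rest of this log, is the expected state of a not-yet-initialized node: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so until one of those runs the unit fails and systemd keeps rescheduling it. A sketch of the failing load path, using a hypothetical loadKubeletConfig helper rather than kubelet's real loader:

    package main

    import (
        "fmt"
        "os"
    )

    // loadKubeletConfig mimics the startup path that fails above: the file
    // is absent, the process exits nonzero, and systemd schedules a restart.
    func loadKubeletConfig(path string) ([]byte, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("failed to load Kubelet config file %s: %w", path, err)
        }
        return b, nil
    }

    func main() {
        if _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }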
Apr 16 01:01:33.510117 dockerd[1610]: time="2026-04-16T01:01:33.508938090Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 01:01:33.510117 dockerd[1610]: time="2026-04-16T01:01:33.510010674Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 16 01:01:33.510734 dockerd[1610]: time="2026-04-16T01:01:33.510674489Z" level=info msg="Daemon has completed initialization" Apr 16 01:01:34.233521 dockerd[1610]: time="2026-04-16T01:01:34.225958860Z" level=info msg="API listen on /run/docker.sock" Apr 16 01:01:34.239367 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 01:01:36.291408 containerd[1456]: time="2026-04-16T01:01:36.291058459Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 16 01:01:37.433192 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 01:01:37.446286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:01:38.029149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377516309.mount: Deactivated successfully. Apr 16 01:01:38.943947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:01:39.044366 (kubelet)[1777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:01:39.244793 kubelet[1777]: E0416 01:01:39.244025 1777 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:01:39.252807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:01:39.252975 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:01:49.515341 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 16 01:01:49.622380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:01:53.553454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:01:53.704147 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:01:54.289579 kubelet[1840]: E0416 01:01:54.285202 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:01:54.298909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:01:54.299359 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:01:54.304272 systemd[1]: kubelet.service: Consumed 2.465s CPU time. 
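Note the cadence of that restart: the kubelet failed at 01:01:27.347008 and the replacement job was scheduled at 01:01:37.433192, about ten seconds later, consistent with the Restart=always / RestartSec=10 settings kubeadm's kubelet unit conventionally ships with (an inference from the timing; the log itself does not show the unit file). The delta, computed from the two timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the failure and restart messages above.
        fail, _ := time.Parse(time.StampMicro, "Apr 16 01:01:27.347008")
        restart, _ := time.Parse(time.StampMicro, "Apr 16 01:01:37.433192")
        fmt.Println(restart.Sub(fail)) // 10.086184s
    }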
Apr 16 01:01:56.634621 containerd[1456]: time="2026-04-16T01:01:56.631593275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:01:56.643177 containerd[1456]: time="2026-04-16T01:01:56.643040471Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 16 01:01:56.658937 containerd[1456]: time="2026-04-16T01:01:56.658524961Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:01:56.730865 containerd[1456]: time="2026-04-16T01:01:56.730400417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:01:56.733592 containerd[1456]: time="2026-04-16T01:01:56.733494329Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 20.442339957s" Apr 16 01:01:56.733592 containerd[1456]: time="2026-04-16T01:01:56.733586951Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 16 01:01:56.743508 containerd[1456]: time="2026-04-16T01:01:56.741452600Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 16 01:02:02.244774 containerd[1456]: time="2026-04-16T01:02:02.243949565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:02.248650 containerd[1456]: time="2026-04-16T01:02:02.247466371Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 16 01:02:02.248938 containerd[1456]: time="2026-04-16T01:02:02.248820033Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:02.453954 containerd[1456]: time="2026-04-16T01:02:02.452362651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:02.491717 containerd[1456]: time="2026-04-16T01:02:02.490562691Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 5.747478379s" Apr 16 01:02:02.491717 containerd[1456]: time="2026-04-16T01:02:02.491487273Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 16 01:02:02.491717 
containerd[1456]: time="2026-04-16T01:02:02.494262118Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 16 01:02:04.415021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 16 01:02:04.443565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:02:05.366564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:02:05.426079 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:02:05.599889 kubelet[1864]: E0416 01:02:05.599241 1864 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:02:05.607056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:02:05.607294 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:02:05.918897 update_engine[1445]: I20260416 01:02:05.913174 1445 update_attempter.cc:509] Updating boot flags... Apr 16 01:02:06.266685 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1880) Apr 16 01:02:06.452550 containerd[1456]: time="2026-04-16T01:02:06.452471064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:06.459917 containerd[1456]: time="2026-04-16T01:02:06.459722823Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 16 01:02:06.464898 containerd[1456]: time="2026-04-16T01:02:06.462918853Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:06.492169 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1879) Apr 16 01:02:06.508253 containerd[1456]: time="2026-04-16T01:02:06.505985335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:06.510336 containerd[1456]: time="2026-04-16T01:02:06.510255061Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 4.015906384s" Apr 16 01:02:06.510336 containerd[1456]: time="2026-04-16T01:02:06.510331695Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 16 01:02:06.545774 containerd[1456]: time="2026-04-16T01:02:06.541652068Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 16 01:02:06.703237 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1879) Apr 16 01:02:10.294382 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1489855953.mount: Deactivated successfully. Apr 16 01:02:12.563924 containerd[1456]: time="2026-04-16T01:02:12.561029277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:12.571145 containerd[1456]: time="2026-04-16T01:02:12.571045209Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 16 01:02:12.572703 containerd[1456]: time="2026-04-16T01:02:12.572657210Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:12.588816 containerd[1456]: time="2026-04-16T01:02:12.588298375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:12.590041 containerd[1456]: time="2026-04-16T01:02:12.588688354Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 6.034909712s" Apr 16 01:02:12.590041 containerd[1456]: time="2026-04-16T01:02:12.589378481Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 16 01:02:12.594898 containerd[1456]: time="2026-04-16T01:02:12.593744632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 16 01:02:15.199245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792132641.mount: Deactivated successfully. Apr 16 01:02:15.808944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 16 01:02:15.932860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:02:18.089079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:02:18.256853 (kubelet)[1916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:02:19.153937 kubelet[1916]: E0416 01:02:19.153291 1916 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:02:19.203786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:02:19.204158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:02:19.222880 systemd[1]: kubelet.service: Consumed 1.590s CPU time. Apr 16 01:02:29.559617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 16 01:02:29.599240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:02:32.537630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
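The pull results above can be turned into rough transfer rates: for kube-proxy, 25971973 bytes over 6.034909712 s is about 4.1 MiB/s. The reported size appears to be the compressed image size, so treat the figure as an approximation:

    package main

    import "fmt"

    func main() {
        // Numbers from the kube-proxy PullImage result above.
        const bytes = 25971973   // reported image size, apparently compressed
        const secs = 6.034909712 // reported pull duration
        fmt.Printf("%.2f MiB/s\n", bytes/secs/(1<<20)) // ~4.10 MiB/s
    }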
Apr 16 01:02:32.652034 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:02:33.476062 kubelet[1973]: E0416 01:02:33.475507 1973 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:02:33.480167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:02:33.481973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:02:33.486451 systemd[1]: kubelet.service: Consumed 1.151s CPU time. Apr 16 01:02:36.475826 containerd[1456]: time="2026-04-16T01:02:36.469637310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:36.501533 containerd[1456]: time="2026-04-16T01:02:36.501315082Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 16 01:02:36.503757 containerd[1456]: time="2026-04-16T01:02:36.503559981Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:36.536634 containerd[1456]: time="2026-04-16T01:02:36.535594988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:36.590623 containerd[1456]: time="2026-04-16T01:02:36.589427804Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 23.994973526s" Apr 16 01:02:36.590623 containerd[1456]: time="2026-04-16T01:02:36.590210506Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 16 01:02:36.622755 containerd[1456]: time="2026-04-16T01:02:36.619428633Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 16 01:02:38.921054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1977147920.mount: Deactivated successfully. 
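The \x2d sequences in the tmpmount unit names above are systemd's unit-name escaping: '/' separators in a path map to '-', so a literal '-' inside a path component has to be hex-escaped to stay unambiguous. A simplified sketch of the transformation (systemd-escape(1) documents the full rules):

    package main

    import (
        "fmt"
        "strings"
    )

    // unitEscape sketches systemd's path escaping: '/' becomes '-', and a
    // literal '-' or '\' becomes a \xXX escape. Simplified on purpose.
    func unitEscape(path string) string {
        var b strings.Builder
        for _, c := range []byte(strings.Trim(path, "/")) {
            switch {
            case c == '/':
                b.WriteByte('-')
            case c == '-' || c == '\\':
                fmt.Fprintf(&b, `\x%02x`, c)
            default:
                b.WriteByte(c)
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(unitEscape("/var/lib/containerd/tmpmounts/containerd-mount1977147920") + ".mount")
        // var-lib-containerd-tmpmounts-containerd\x2dmount1977147920.mount
    }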
Apr 16 01:02:38.984569 containerd[1456]: time="2026-04-16T01:02:38.982566956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:38.984569 containerd[1456]: time="2026-04-16T01:02:38.983960147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 16 01:02:38.992481 containerd[1456]: time="2026-04-16T01:02:38.991810476Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:39.000568 containerd[1456]: time="2026-04-16T01:02:39.000452950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:39.001455 containerd[1456]: time="2026-04-16T01:02:39.001365521Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 2.381761691s" Apr 16 01:02:39.001455 containerd[1456]: time="2026-04-16T01:02:39.001430012Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 16 01:02:39.006852 containerd[1456]: time="2026-04-16T01:02:39.006456129Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 16 01:02:41.730854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071186391.mount: Deactivated successfully. Apr 16 01:02:43.654798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 16 01:02:43.712173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:02:44.921208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:02:44.992507 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:02:45.279243 kubelet[2016]: E0416 01:02:45.275924 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:02:45.298568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:02:45.298864 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 01:02:54.716543 containerd[1456]: time="2026-04-16T01:02:54.715625759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:54.722341 containerd[1456]: time="2026-04-16T01:02:54.722240543Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 16 01:02:54.724053 containerd[1456]: time="2026-04-16T01:02:54.723914944Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:54.729391 containerd[1456]: time="2026-04-16T01:02:54.729299578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:02:54.754775 containerd[1456]: time="2026-04-16T01:02:54.754164416Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 15.747542484s" Apr 16 01:02:54.754775 containerd[1456]: time="2026-04-16T01:02:54.754463952Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 16 01:02:55.440655 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 16 01:02:55.475862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:02:56.964511 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:02:56.965942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:02:57.499652 kubelet[2099]: E0416 01:02:57.498303 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:02:57.512455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:02:57.548259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:03:07.734244 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 16 01:03:07.795728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:03:10.690823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 01:03:11.048797 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:03:11.903318 kubelet[2116]: E0416 01:03:11.902367 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:03:11.955884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:03:11.958016 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:03:11.958934 systemd[1]: kubelet.service: Consumed 1.407s CPU time. Apr 16 01:03:18.691798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:03:18.696074 systemd[1]: kubelet.service: Consumed 1.407s CPU time. Apr 16 01:03:18.766305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:03:19.290063 systemd[1]: Reloading requested from client PID 2132 ('systemctl') (unit session-5.scope)... Apr 16 01:03:19.290202 systemd[1]: Reloading... Apr 16 01:03:19.806587 zram_generator::config[2171]: No configuration found. Apr 16 01:03:22.231384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:03:23.081000 systemd[1]: Reloading finished in 3790 ms. Apr 16 01:03:23.512491 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:03:23.532559 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 01:03:23.532956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:03:23.629772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:03:29.147704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:03:29.665537 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 01:03:30.290504 kubelet[2221]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 01:03:30.290504 kubelet[2221]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 01:03:30.303145 kubelet[2221]: I0416 01:03:30.290352 2221 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 01:03:36.034549 kubelet[2221]: I0416 01:03:36.029671 2221 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 01:03:36.034549 kubelet[2221]: I0416 01:03:36.032532 2221 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 01:03:36.058389 kubelet[2221]: I0416 01:03:36.037006 2221 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 01:03:36.058389 kubelet[2221]: I0416 01:03:36.037379 2221 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
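With a real configuration in place after the reload, the kubelet now starts properly, but every control-plane call that follows fails with connection refused: nothing is listening on 10.0.0.49:6443 yet, which is normal before the kube-apiserver static pod comes up. The errors are ordinary TCP refusals, the same thing a standalone probe would see:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The same reachability the kubelet is retrying against the apiserver.
        conn, err := net.DialTimeout("tcp", "10.0.0.49:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver reachable")
    }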
Apr 16 01:03:36.059197 kubelet[2221]: I0416 01:03:36.058968 2221 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 01:03:36.391586 kubelet[2221]: E0416 01:03:36.386696 2221 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 01:03:36.431779 kubelet[2221]: I0416 01:03:36.430774 2221 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 01:03:36.572541 kubelet[2221]: E0416 01:03:36.570967 2221 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 01:03:36.572541 kubelet[2221]: I0416 01:03:36.571274 2221 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 16 01:03:37.015577 kubelet[2221]: I0416 01:03:37.014956 2221 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 16 01:03:37.171752 kubelet[2221]: I0416 01:03:37.152286 2221 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 01:03:37.196162 kubelet[2221]: I0416 01:03:37.153044 2221 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 01:03:37.196162 kubelet[2221]: I0416 01:03:37.193066 2221 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 01:03:37.196162 kubelet[2221]: I0416 01:03:37.193434 2221 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 01:03:37.203044 kubelet[2221]: I0416 01:03:37.196832 2221 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) 
manager" Apr 16 01:03:37.241482 kubelet[2221]: I0416 01:03:37.222279 2221 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:03:37.241482 kubelet[2221]: I0416 01:03:37.223601 2221 kubelet.go:475] "Attempting to sync node with API server" Apr 16 01:03:37.244802 kubelet[2221]: I0416 01:03:37.243760 2221 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 01:03:37.244802 kubelet[2221]: I0416 01:03:37.243892 2221 kubelet.go:387] "Adding apiserver pod source" Apr 16 01:03:37.244802 kubelet[2221]: I0416 01:03:37.244242 2221 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 01:03:37.276459 kubelet[2221]: E0416 01:03:37.256803 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 01:03:37.345284 kubelet[2221]: I0416 01:03:37.344670 2221 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 01:03:37.346558 kubelet[2221]: I0416 01:03:37.346214 2221 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 01:03:37.346558 kubelet[2221]: I0416 01:03:37.346254 2221 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 01:03:37.368065 kubelet[2221]: W0416 01:03:37.353667 2221 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 16 01:03:37.386478 kubelet[2221]: E0416 01:03:37.379880 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 01:03:37.427240 kubelet[2221]: I0416 01:03:37.427066 2221 server.go:1262] "Started kubelet" Apr 16 01:03:37.429712 kubelet[2221]: I0416 01:03:37.429644 2221 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 01:03:37.442151 kubelet[2221]: I0416 01:03:37.440880 2221 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 01:03:37.442151 kubelet[2221]: I0416 01:03:37.441051 2221 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 01:03:37.442151 kubelet[2221]: I0416 01:03:37.441999 2221 server.go:310] "Adding debug handlers to kubelet server" Apr 16 01:03:37.499235 kubelet[2221]: I0416 01:03:37.491258 2221 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 01:03:37.525159 kubelet[2221]: I0416 01:03:37.525066 2221 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 01:03:37.566578 kubelet[2221]: E0416 01:03:37.507153 2221 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b0ba57c01fc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 01:03:37.426853826 +0000 UTC m=+7.719874704,LastTimestamp:2026-04-16 01:03:37.426853826 +0000 UTC m=+7.719874704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 01:03:37.566578 kubelet[2221]: I0416 01:03:37.555777 2221 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 01:03:37.566578 kubelet[2221]: I0416 01:03:37.556508 2221 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 01:03:37.566578 kubelet[2221]: I0416 01:03:37.557572 2221 reconciler.go:29] "Reconciler: start to sync state" Apr 16 01:03:37.566578 kubelet[2221]: E0416 01:03:37.559202 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 01:03:37.566578 kubelet[2221]: E0416 01:03:37.559580 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:03:37.566578 kubelet[2221]: I0416 01:03:37.559981 2221 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 01:03:37.568489 kubelet[2221]: E0416 01:03:37.560078 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms" Apr 16 01:03:37.568489 kubelet[2221]: E0416 01:03:37.565188 2221 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 01:03:37.572042 kubelet[2221]: I0416 01:03:37.571967 2221 factory.go:223] Registration of the systemd container factory successfully Apr 16 01:03:37.572524 kubelet[2221]: I0416 01:03:37.572180 2221 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 01:03:37.599501 kubelet[2221]: I0416 01:03:37.599275 2221 factory.go:223] Registration of the containerd container factory successfully Apr 16 01:03:37.678804 kubelet[2221]: E0416 01:03:37.670223 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:03:37.770061 kubelet[2221]: E0416 01:03:37.769564 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms" Apr 16 01:03:37.772421 kubelet[2221]: E0416 01:03:37.770433 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:03:37.879168 kubelet[2221]: E0416 01:03:37.871881 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:03:37.879168 kubelet[2221]: I0416 01:03:37.872140 2221 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 01:03:37.879168 kubelet[2221]: I0416 01:03:37.872161 2221 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 01:03:37.879168 kubelet[2221]: I0416 01:03:37.872357 2221 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:03:37.910983 kubelet[2221]: I0416 01:03:37.910560 2221 policy_none.go:49] "None policy: Start" Apr 16 01:03:37.910983 kubelet[2221]: I0416 01:03:37.910815 2221 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 01:03:37.910983 kubelet[2221]: I0416 01:03:37.910846 2221 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 01:03:37.942083 kubelet[2221]: I0416 01:03:37.941889 2221 policy_none.go:47] "Start" Apr 16 01:03:37.996569 kubelet[2221]: E0416 01:03:37.995641 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:03:38.022926 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 01:03:38.056524 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 01:03:38.068055 kubelet[2221]: I0416 01:03:38.067937 2221 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 01:03:38.073916 kubelet[2221]: I0416 01:03:38.072968 2221 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 01:03:38.073916 kubelet[2221]: I0416 01:03:38.073042 2221 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 01:03:38.073916 kubelet[2221]: I0416 01:03:38.073176 2221 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 01:03:38.073916 kubelet[2221]: E0416 01:03:38.073272 2221 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 01:03:38.078176 kubelet[2221]: E0416 01:03:38.077704 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 01:03:38.097494 kubelet[2221]: E0416 01:03:38.097216 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:03:38.097933 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 16 01:03:38.116006 kubelet[2221]: E0416 01:03:38.110279 2221 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 01:03:38.116006 kubelet[2221]: I0416 01:03:38.112449 2221 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 01:03:38.116006 kubelet[2221]: I0416 01:03:38.112467 2221 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 01:03:38.116006 kubelet[2221]: I0416 01:03:38.113142 2221 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 01:03:38.117681 kubelet[2221]: E0416 01:03:38.117581 2221 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 01:03:38.117681 kubelet[2221]: E0416 01:03:38.117656 2221 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 01:03:38.182447 kubelet[2221]: E0416 01:03:38.179572 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms" Apr 16 01:03:38.256666 kubelet[2221]: I0416 01:03:38.255795 2221 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:03:38.256666 kubelet[2221]: E0416 01:03:38.257017 2221 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Apr 16 01:03:38.289474 kubelet[2221]: I0416 01:03:38.289349 2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:03:38.300036 kubelet[2221]: I0416 01:03:38.296824 2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:03:38.300036 kubelet[2221]: I0416 01:03:38.296885 2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:03:38.300036 kubelet[2221]: I0416 01:03:38.297169 2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a1193a8d9d8acec200f820e661eff93-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a1193a8d9d8acec200f820e661eff93\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:03:38.300036 kubelet[2221]: I0416 01:03:38.297208 2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a1193a8d9d8acec200f820e661eff93-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a1193a8d9d8acec200f820e661eff93\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:03:38.300036 kubelet[2221]: I0416 01:03:38.297243 2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a1193a8d9d8acec200f820e661eff93-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6a1193a8d9d8acec200f820e661eff93\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:03:38.303886 kubelet[2221]: I0416 01:03:38.297266 2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:03:38.303886 kubelet[2221]: I0416 01:03:38.297285 2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:03:38.316896 kubelet[2221]: E0416 01:03:38.313439 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 01:03:38.365629 systemd[1]: Created slice kubepods-burstable-pod6a1193a8d9d8acec200f820e661eff93.slice - libcontainer container kubepods-burstable-pod6a1193a8d9d8acec200f820e661eff93.slice. Apr 16 01:03:38.401926 kubelet[2221]: I0416 01:03:38.399971 2221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 16 01:03:38.447570 kubelet[2221]: E0416 01:03:38.447283 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:38.472670 kubelet[2221]: I0416 01:03:38.471647 2221 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:03:38.472670 kubelet[2221]: E0416 01:03:38.472358 2221 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Apr 16 01:03:38.475077 kubelet[2221]: E0416 01:03:38.474983 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:38.531396 containerd[1456]: time="2026-04-16T01:03:38.512786062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6a1193a8d9d8acec200f820e661eff93,Namespace:kube-system,Attempt:0,}" Apr 16 01:03:38.565454 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. 
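
The volume "UniqueName" strings in the reconciler_common entries above follow one visible pattern: the in-tree plugin name, then the pod UID and volume name joined by a hyphen. A sketch reconstructing one of the names from this log (the pattern is inferred from these entries, not quoted from kubelet source):

```go
package main

import "fmt"

func main() {
	// Values copied from the reconciler entries above.
	plugin := "kubernetes.io/host-path"
	podUID := "c6bb8708a026256e82ca4c5631a78b5a" // kube-controller-manager-localhost
	volume := "k8s-certs"

	unique := fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
	fmt.Println(unique)
	// kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs
}
```
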
Apr 16 01:03:38.905019 kubelet[2221]: E0416 01:03:38.904529 2221 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 01:03:38.931457 kubelet[2221]: I0416 01:03:38.906534 2221 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:03:38.943592 kubelet[2221]: E0416 01:03:38.940870 2221 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Apr 16 01:03:38.943592 kubelet[2221]: E0416 01:03:38.943434 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 01:03:38.994348 kubelet[2221]: E0416 01:03:38.986671 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="1.6s" Apr 16 01:03:38.997392 kubelet[2221]: E0416 01:03:38.995923 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:39.010921 kubelet[2221]: E0416 01:03:39.009773 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:39.038798 containerd[1456]: time="2026-04-16T01:03:39.034022647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 16 01:03:39.073181 kubelet[2221]: E0416 01:03:39.069061 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 01:03:39.227724 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. 
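
Note how the lease controller's "will retry" interval doubles across these entries: 200ms, 400ms, 800ms, 1.6s, then (further down) 3.2s and 6.4s, before later entries settle at 7s. A sketch of that capped doubling, with the ceiling inferred from the later interval="7s" lines:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling retry interval as seen in the controller.go "will retry"
	// entries; the 7s ceiling is inferred from later interval="7s" lines.
	interval := 200 * time.Millisecond
	const max = 7 * time.Second
	for i := 0; i < 8; i++ {
		fmt.Println(interval) // 200ms 400ms 800ms 1.6s 3.2s 6.4s 7s 7s
		interval *= 2
		if interval > max {
			interval = max
		}
	}
}
```
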
Apr 16 01:03:39.241046 kubelet[2221]: E0416 01:03:39.240707 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 01:03:39.343838 kubelet[2221]: E0416 01:03:39.335898 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:39.427068 kubelet[2221]: E0416 01:03:39.426396 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:39.468858 containerd[1456]: time="2026-04-16T01:03:39.467179111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 16 01:03:39.809938 kubelet[2221]: I0416 01:03:39.803853 2221 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:03:39.809938 kubelet[2221]: E0416 01:03:39.812418 2221 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Apr 16 01:03:40.246228 kubelet[2221]: E0416 01:03:40.245040 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 01:03:40.550632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567599778.mount: Deactivated successfully. 
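
The unit name in the entry above (var-lib-containerd-tmpmounts-containerd\x2dmount1567599778.mount) is systemd's escaped form of the mount point /var/lib/containerd/tmpmounts/containerd-mount1567599778: literal hyphens inside a path component become \x2d and the path separators become hyphens. A simplified sketch of just those two rules (real systemd-escape also hex-escapes other unsafe characters):

```go
package main

import (
	"fmt"
	"strings"
)

// unitFromMountPath applies the two escaping rules visible in the log:
// '-' inside a path component -> `\x2d`, then '/' separators -> '-'.
// (systemd additionally escapes other characters; this is a simplification.)
func unitFromMountPath(p string) string {
	p = strings.Trim(p, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`)
	return strings.ReplaceAll(p, "/", "-") + ".mount"
}

func main() {
	fmt.Println(unitFromMountPath("/var/lib/containerd/tmpmounts/containerd-mount1567599778"))
	// var-lib-containerd-tmpmounts-containerd\x2dmount1567599778.mount
}
```
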
Apr 16 01:03:40.602696 kubelet[2221]: E0416 01:03:40.599766 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="3.2s" Apr 16 01:03:40.778060 containerd[1456]: time="2026-04-16T01:03:40.777320745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:03:40.809035 containerd[1456]: time="2026-04-16T01:03:40.797612275Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:03:40.817278 containerd[1456]: time="2026-04-16T01:03:40.816901983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 16 01:03:40.840849 containerd[1456]: time="2026-04-16T01:03:40.840125285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 01:03:40.864460 containerd[1456]: time="2026-04-16T01:03:40.862702445Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:03:40.876650 containerd[1456]: time="2026-04-16T01:03:40.875528641Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:03:40.890006 containerd[1456]: time="2026-04-16T01:03:40.888570926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 01:03:40.919932 containerd[1456]: time="2026-04-16T01:03:40.919198180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:03:40.938932 containerd[1456]: time="2026-04-16T01:03:40.937204815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.902996547s" Apr 16 01:03:40.941358 containerd[1456]: time="2026-04-16T01:03:40.937534774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.424304856s" Apr 16 01:03:40.956372 containerd[1456]: time="2026-04-16T01:03:40.942637065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.475036932s" Apr 16 
01:03:41.108161 kubelet[2221]: E0416 01:03:41.099045 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 01:03:41.428161 kubelet[2221]: I0416 01:03:41.426999 2221 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:03:41.437874 kubelet[2221]: E0416 01:03:41.437383 2221 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Apr 16 01:03:41.467071 kubelet[2221]: E0416 01:03:41.466606 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 01:03:42.012735 kubelet[2221]: E0416 01:03:41.999938 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 01:03:42.493668 containerd[1456]: time="2026-04-16T01:03:42.489924491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:03:42.493668 containerd[1456]: time="2026-04-16T01:03:42.490139018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:03:42.493668 containerd[1456]: time="2026-04-16T01:03:42.490164171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:03:42.493668 containerd[1456]: time="2026-04-16T01:03:42.490437795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:03:42.509809 containerd[1456]: time="2026-04-16T01:03:42.489030882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:03:42.509809 containerd[1456]: time="2026-04-16T01:03:42.505724142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:03:42.509809 containerd[1456]: time="2026-04-16T01:03:42.505748632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:03:42.509809 containerd[1456]: time="2026-04-16T01:03:42.505941683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:03:42.510148 containerd[1456]: time="2026-04-16T01:03:42.498750721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:03:42.510148 containerd[1456]: time="2026-04-16T01:03:42.498850514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:03:42.510148 containerd[1456]: time="2026-04-16T01:03:42.498876898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:03:42.510148 containerd[1456]: time="2026-04-16T01:03:42.505595208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:03:43.064088 systemd[1]: Started cri-containerd-14e23f83ec450f9b76cf037fc734167287febb33a708a0fa20244618178d5b06.scope - libcontainer container 14e23f83ec450f9b76cf037fc734167287febb33a708a0fa20244618178d5b06. Apr 16 01:03:43.217034 systemd[1]: Started cri-containerd-198d814eec32e4abd81ef4c4bb921ba43f8be02293d94919d490aa4752fcf8be.scope - libcontainer container 198d814eec32e4abd81ef4c4bb921ba43f8be02293d94919d490aa4752fcf8be. Apr 16 01:03:43.311867 systemd[1]: Started cri-containerd-a72c60ff143983def3381cd4cab47550e112b5fee1aecd360ec9bad89a6fa0d4.scope - libcontainer container a72c60ff143983def3381cd4cab47550e112b5fee1aecd360ec9bad89a6fa0d4. Apr 16 01:03:43.496162 kubelet[2221]: E0416 01:03:43.469565 2221 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 01:03:43.823130 kubelet[2221]: E0416 01:03:43.804865 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="6.4s" Apr 16 01:03:44.449301 containerd[1456]: time="2026-04-16T01:03:44.445747843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"14e23f83ec450f9b76cf037fc734167287febb33a708a0fa20244618178d5b06\"" Apr 16 01:03:44.495695 kubelet[2221]: E0416 01:03:44.480338 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:44.592560 containerd[1456]: time="2026-04-16T01:03:44.573617564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6a1193a8d9d8acec200f820e661eff93,Namespace:kube-system,Attempt:0,} returns sandbox id \"a72c60ff143983def3381cd4cab47550e112b5fee1aecd360ec9bad89a6fa0d4\"" Apr 16 01:03:44.649138 containerd[1456]: time="2026-04-16T01:03:44.618940669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"198d814eec32e4abd81ef4c4bb921ba43f8be02293d94919d490aa4752fcf8be\"" Apr 16 01:03:44.682396 kubelet[2221]: E0416 01:03:44.604662 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 
01:03:44.687268 kubelet[2221]: I0416 01:03:44.685507 2221 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:03:44.687268 kubelet[2221]: E0416 01:03:44.685863 2221 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Apr 16 01:03:44.687268 kubelet[2221]: E0416 01:03:44.685876 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:44.711338 containerd[1456]: time="2026-04-16T01:03:44.696934260Z" level=info msg="CreateContainer within sandbox \"14e23f83ec450f9b76cf037fc734167287febb33a708a0fa20244618178d5b06\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 01:03:44.711338 containerd[1456]: time="2026-04-16T01:03:44.709867956Z" level=info msg="CreateContainer within sandbox \"a72c60ff143983def3381cd4cab47550e112b5fee1aecd360ec9bad89a6fa0d4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 01:03:44.750282 containerd[1456]: time="2026-04-16T01:03:44.749665378Z" level=info msg="CreateContainer within sandbox \"198d814eec32e4abd81ef4c4bb921ba43f8be02293d94919d490aa4752fcf8be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 01:03:44.835153 kubelet[2221]: E0416 01:03:44.834963 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 01:03:44.904282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044812766.mount: Deactivated successfully. 
Apr 16 01:03:44.988079 containerd[1456]: time="2026-04-16T01:03:44.984487907Z" level=info msg="CreateContainer within sandbox \"14e23f83ec450f9b76cf037fc734167287febb33a708a0fa20244618178d5b06\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d46c98dba66e6e6acba21a1ef11cfb38bc001cfda45ea4279356711723466ada\"" Apr 16 01:03:45.005657 containerd[1456]: time="2026-04-16T01:03:45.002948142Z" level=info msg="CreateContainer within sandbox \"a72c60ff143983def3381cd4cab47550e112b5fee1aecd360ec9bad89a6fa0d4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"03e99e9f22149400facea30947c899be75cb8415e84b302f44c318bdc504b165\"" Apr 16 01:03:45.005657 containerd[1456]: time="2026-04-16T01:03:45.003381539Z" level=info msg="StartContainer for \"d46c98dba66e6e6acba21a1ef11cfb38bc001cfda45ea4279356711723466ada\"" Apr 16 01:03:45.021467 containerd[1456]: time="2026-04-16T01:03:45.020997751Z" level=info msg="StartContainer for \"03e99e9f22149400facea30947c899be75cb8415e84b302f44c318bdc504b165\"" Apr 16 01:03:45.100645 containerd[1456]: time="2026-04-16T01:03:45.100247351Z" level=info msg="CreateContainer within sandbox \"198d814eec32e4abd81ef4c4bb921ba43f8be02293d94919d490aa4752fcf8be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b0b4ad7bc7f1403f69d066aba98dcda2b45e84d696bb91ab87778b263110b7c6\"" Apr 16 01:03:45.114897 containerd[1456]: time="2026-04-16T01:03:45.114316965Z" level=info msg="StartContainer for \"b0b4ad7bc7f1403f69d066aba98dcda2b45e84d696bb91ab87778b263110b7c6\"" Apr 16 01:03:45.412655 systemd[1]: Started cri-containerd-03e99e9f22149400facea30947c899be75cb8415e84b302f44c318bdc504b165.scope - libcontainer container 03e99e9f22149400facea30947c899be75cb8415e84b302f44c318bdc504b165. Apr 16 01:03:45.492674 systemd[1]: Started cri-containerd-d46c98dba66e6e6acba21a1ef11cfb38bc001cfda45ea4279356711723466ada.scope - libcontainer container d46c98dba66e6e6acba21a1ef11cfb38bc001cfda45ea4279356711723466ada. Apr 16 01:03:45.711695 systemd[1]: Started cri-containerd-b0b4ad7bc7f1403f69d066aba98dcda2b45e84d696bb91ab87778b263110b7c6.scope - libcontainer container b0b4ad7bc7f1403f69d066aba98dcda2b45e84d696bb91ab87778b263110b7c6. Apr 16 01:03:45.805275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640718191.mount: Deactivated successfully. 
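
The containerd entries above trace the CRI call order for each static pod: RunPodSandbox returns a sandbox ID (the long hex strings), CreateContainer places a container inside that sandbox, and StartContainer launches it, matching the systemd cri-containerd-*.scope units. A minimal sketch of the same three calls against the CRI v1 API, assuming the standard containerd socket path and placeholder pod/container configs (an illustration, not the kubelet's runtime manager):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path: containerd's default CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{ /* metadata, namespaces, ... */ }

	// 1) RunPodSandbox -> sandbox ID (the "returns sandbox id" log lines).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2) CreateContainer within that sandbox -> container ID.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{ /* image, command, mounts, ... */ },
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3) StartContainer, matching the "StartContainer ... returns successfully" lines.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```
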
Apr 16 01:03:45.945417 kubelet[2221]: E0416 01:03:45.944600 2221 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b0ba57c01fc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 01:03:37.426853826 +0000 UTC m=+7.719874704,LastTimestamp:2026-04-16 01:03:37.426853826 +0000 UTC m=+7.719874704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 01:03:46.592045 containerd[1456]: time="2026-04-16T01:03:46.591391780Z" level=info msg="StartContainer for \"d46c98dba66e6e6acba21a1ef11cfb38bc001cfda45ea4279356711723466ada\" returns successfully" Apr 16 01:03:46.592045 containerd[1456]: time="2026-04-16T01:03:46.591660789Z" level=info msg="StartContainer for \"03e99e9f22149400facea30947c899be75cb8415e84b302f44c318bdc504b165\" returns successfully" Apr 16 01:03:46.592045 containerd[1456]: time="2026-04-16T01:03:46.591702966Z" level=info msg="StartContainer for \"b0b4ad7bc7f1403f69d066aba98dcda2b45e84d696bb91ab87778b263110b7c6\" returns successfully" Apr 16 01:03:46.906053 kubelet[2221]: E0416 01:03:46.885872 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 01:03:47.319406 kubelet[2221]: E0416 01:03:47.166917 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 01:03:47.423952 kubelet[2221]: E0416 01:03:47.352883 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 01:03:47.429419 kubelet[2221]: E0416 01:03:47.428470 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:47.429419 kubelet[2221]: E0416 01:03:47.428802 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:47.431474 kubelet[2221]: E0416 01:03:47.430333 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:47.431474 kubelet[2221]: E0416 01:03:47.430509 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 16 01:03:47.520195 kubelet[2221]: E0416 01:03:47.519736 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:47.522945 kubelet[2221]: E0416 01:03:47.522253 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:48.129759 kubelet[2221]: E0416 01:03:48.129263 2221 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 01:03:49.033936 kubelet[2221]: E0416 01:03:49.002196 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:49.033936 kubelet[2221]: E0416 01:03:49.002659 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:49.033936 kubelet[2221]: E0416 01:03:49.003049 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:49.033936 kubelet[2221]: E0416 01:03:49.030362 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:49.033936 kubelet[2221]: E0416 01:03:49.031046 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:49.114188 kubelet[2221]: E0416 01:03:49.102570 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:49.562139 kubelet[2221]: E0416 01:03:49.553740 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:49.562139 kubelet[2221]: E0416 01:03:49.554714 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:49.562139 kubelet[2221]: E0416 01:03:49.554904 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:49.562139 kubelet[2221]: E0416 01:03:49.562359 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:49.617048 kubelet[2221]: E0416 01:03:49.562691 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:49.617048 kubelet[2221]: E0416 01:03:49.575845 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:50.726387 kubelet[2221]: E0416 01:03:50.723005 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Apr 16 01:03:50.726387 kubelet[2221]: E0416 01:03:50.723685 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:51.354622 kubelet[2221]: I0416 01:03:51.354269 2221 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:03:57.094783 kubelet[2221]: E0416 01:03:57.094009 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:57.094783 kubelet[2221]: E0416 01:03:57.095182 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:03:58.170931 kubelet[2221]: E0416 01:03:58.170325 2221 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 01:03:58.582684 kubelet[2221]: E0416 01:03:58.581547 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:03:58.582684 kubelet[2221]: E0416 01:03:58.581821 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:00.239242 kubelet[2221]: E0416 01:04:00.238550 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 01:04:01.450721 kubelet[2221]: E0416 01:04:01.450133 2221 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 01:04:02.214964 kubelet[2221]: E0416 01:04:02.212339 2221 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 01:04:03.986375 kubelet[2221]: E0416 01:04:03.984782 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 01:04:05.591743 kubelet[2221]: E0416 01:04:05.569720 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 01:04:06.032515 kubelet[2221]: E0416 01:04:06.026963 2221 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" 
event="&Event{ObjectMeta:{localhost.18a6b0ba57c01fc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 01:03:37.426853826 +0000 UTC m=+7.719874704,LastTimestamp:2026-04-16 01:03:37.426853826 +0000 UTC m=+7.719874704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 01:04:07.532417 kubelet[2221]: E0416 01:04:07.531788 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 01:04:08.723693 kubelet[2221]: E0416 01:04:08.719389 2221 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 01:04:09.412198 kubelet[2221]: I0416 01:04:09.406502 2221 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:04:09.604262 kubelet[2221]: E0416 01:04:09.603666 2221 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:04:09.617785 kubelet[2221]: E0416 01:04:09.617623 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:09.692748 kubelet[2221]: E0416 01:04:09.686786 2221 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 01:04:11.325514 kubelet[2221]: I0416 01:04:11.324980 2221 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 01:04:11.325514 kubelet[2221]: E0416 01:04:11.325178 2221 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 01:04:11.433730 kubelet[2221]: E0416 01:04:11.432468 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="7s" Apr 16 01:04:11.463569 kubelet[2221]: E0416 01:04:11.461665 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:11.567948 kubelet[2221]: E0416 01:04:11.563485 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:11.679612 kubelet[2221]: E0416 01:04:11.678532 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:11.784747 kubelet[2221]: E0416 01:04:11.784173 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:11.912477 kubelet[2221]: E0416 01:04:11.911909 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 
01:04:12.033231 kubelet[2221]: E0416 01:04:12.031367 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:12.134935 kubelet[2221]: E0416 01:04:12.133710 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:12.254460 kubelet[2221]: E0416 01:04:12.253596 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:12.360296 kubelet[2221]: E0416 01:04:12.356720 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:12.460367 kubelet[2221]: E0416 01:04:12.459655 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:12.563052 kubelet[2221]: E0416 01:04:12.561594 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:12.663266 kubelet[2221]: E0416 01:04:12.663069 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:12.773387 kubelet[2221]: E0416 01:04:12.766422 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:12.889955 kubelet[2221]: E0416 01:04:12.883182 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:12.997567 kubelet[2221]: E0416 01:04:12.995713 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:13.107657 kubelet[2221]: E0416 01:04:13.105470 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:13.216191 kubelet[2221]: E0416 01:04:13.215586 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:13.321235 kubelet[2221]: E0416 01:04:13.319622 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:13.435089 kubelet[2221]: E0416 01:04:13.433684 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:13.538647 kubelet[2221]: E0416 01:04:13.537848 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:13.641168 kubelet[2221]: E0416 01:04:13.639240 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:13.750350 kubelet[2221]: E0416 01:04:13.749817 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:13.862417 kubelet[2221]: E0416 01:04:13.861415 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:13.975570 kubelet[2221]: E0416 01:04:13.975010 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:14.086970 kubelet[2221]: E0416 01:04:14.085975 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:14.204695 kubelet[2221]: E0416 01:04:14.202761 2221 kubelet_node_status.go:404] "Error 
getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:14.317937 kubelet[2221]: E0416 01:04:14.313133 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:14.416607 kubelet[2221]: E0416 01:04:14.415921 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:14.520442 kubelet[2221]: E0416 01:04:14.519875 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:14.650196 kubelet[2221]: E0416 01:04:14.626544 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:14.799810 kubelet[2221]: E0416 01:04:14.796032 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:14.907738 kubelet[2221]: E0416 01:04:14.897353 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:15.002022 kubelet[2221]: E0416 01:04:15.001355 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:15.110690 kubelet[2221]: E0416 01:04:15.103595 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:15.217675 kubelet[2221]: E0416 01:04:15.209513 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:15.324662 kubelet[2221]: E0416 01:04:15.323288 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:15.437996 kubelet[2221]: E0416 01:04:15.436704 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:15.661197 kubelet[2221]: E0416 01:04:15.661015 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:15.766070 kubelet[2221]: E0416 01:04:15.765383 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:15.872359 kubelet[2221]: E0416 01:04:15.871427 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:15.998399 kubelet[2221]: E0416 01:04:15.991267 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:16.098278 kubelet[2221]: E0416 01:04:16.095280 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:16.197079 kubelet[2221]: E0416 01:04:16.196571 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:16.301354 kubelet[2221]: E0416 01:04:16.299884 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:16.402516 kubelet[2221]: E0416 01:04:16.401797 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:16.506383 kubelet[2221]: E0416 01:04:16.504577 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 
01:04:16.649031 kubelet[2221]: E0416 01:04:16.615013 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:18.735318 kubelet[2221]: E0416 01:04:18.733284 2221 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 01:04:21.658249 kubelet[2221]: E0416 01:04:21.655253 2221 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 01:04:25.200060 kubelet[2221]: E0416 01:04:25.192035 2221 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:04:25.267615 kubelet[2221]: I0416 01:04:25.264499 2221 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 01:04:25.542483 kubelet[2221]: I0416 01:04:25.526492 2221 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:04:25.701524 kubelet[2221]: I0416 01:04:25.700346 2221 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 01:04:25.854926 kubelet[2221]: I0416 01:04:25.848524 2221 apiserver.go:52] "Watching apiserver" Apr 16 01:04:25.926420 kubelet[2221]: E0416 01:04:25.915978 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1
1.0.0.1 8.8.8.8" Apr 16 01:04:25.926420 kubelet[2221]: E0416 01:04:25.916541 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:25.995158 kubelet[2221]: I0416 01:04:25.970511 2221 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 01:04:26.002247 kubelet[2221]: E0416 01:04:26.002207 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:28.726059 kubelet[2221]: I0416 01:04:28.725576 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.725533844 podStartE2EDuration="3.725533844s" podCreationTimestamp="2026-04-16 01:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:04:28.696257512 +0000 UTC m=+58.989278395" watchObservedRunningTime="2026-04-16 01:04:28.725533844 +0000 UTC m=+59.018554727" Apr 16 01:04:29.140513 kubelet[2221]: I0416 01:04:29.137679 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.13766069 podStartE2EDuration="4.13766069s" podCreationTimestamp="2026-04-16 01:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:04:29.117956381 +0000 UTC m=+59.410977262" watchObservedRunningTime="2026-04-16 01:04:29.13766069 +0000 UTC m=+59.430681578" Apr 16 01:04:29.366073 kubelet[2221]: I0416 01:04:29.364019 2221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.363997253 podStartE2EDuration="4.363997253s" podCreationTimestamp="2026-04-16 01:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:04:29.363857249 +0000 UTC m=+59.656878141" watchObservedRunningTime="2026-04-16 01:04:29.363997253 +0000 UTC m=+59.657018140" Apr 16 01:04:35.200329 kubelet[2221]: E0416 01:04:35.199564 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:35.257257 systemd[1]: cri-containerd-b0b4ad7bc7f1403f69d066aba98dcda2b45e84d696bb91ab87778b263110b7c6.scope: Deactivated successfully. Apr 16 01:04:35.262434 systemd[1]: cri-containerd-b0b4ad7bc7f1403f69d066aba98dcda2b45e84d696bb91ab87778b263110b7c6.scope: Consumed 1.928s CPU time. Apr 16 01:04:36.047864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0b4ad7bc7f1403f69d066aba98dcda2b45e84d696bb91ab87778b263110b7c6-rootfs.mount: Deactivated successfully. 
Apr 16 01:04:36.082668 containerd[1456]: time="2026-04-16T01:04:36.081990424Z" level=info msg="shim disconnected" id=b0b4ad7bc7f1403f69d066aba98dcda2b45e84d696bb91ab87778b263110b7c6 namespace=k8s.io Apr 16 01:04:36.082668 containerd[1456]: time="2026-04-16T01:04:36.082263622Z" level=warning msg="cleaning up after shim disconnected" id=b0b4ad7bc7f1403f69d066aba98dcda2b45e84d696bb91ab87778b263110b7c6 namespace=k8s.io Apr 16 01:04:36.082668 containerd[1456]: time="2026-04-16T01:04:36.082279666Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:04:37.784061 kubelet[2221]: I0416 01:04:37.772181 2221 scope.go:117] "RemoveContainer" containerID="b0b4ad7bc7f1403f69d066aba98dcda2b45e84d696bb91ab87778b263110b7c6" Apr 16 01:04:37.784061 kubelet[2221]: E0416 01:04:37.772549 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:37.987056 containerd[1456]: time="2026-04-16T01:04:37.986399597Z" level=info msg="CreateContainer within sandbox \"198d814eec32e4abd81ef4c4bb921ba43f8be02293d94919d490aa4752fcf8be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 16 01:04:38.647533 containerd[1456]: time="2026-04-16T01:04:38.615456654Z" level=info msg="CreateContainer within sandbox \"198d814eec32e4abd81ef4c4bb921ba43f8be02293d94919d490aa4752fcf8be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7c5e6b41c3a7ef2c4593de8e81de8e06bd05c8afaa256264e90c71cbbeccd6b8\"" Apr 16 01:04:38.727501 containerd[1456]: time="2026-04-16T01:04:38.723480150Z" level=info msg="StartContainer for \"7c5e6b41c3a7ef2c4593de8e81de8e06bd05c8afaa256264e90c71cbbeccd6b8\"" Apr 16 01:04:39.791451 systemd[1]: run-containerd-runc-k8s.io-7c5e6b41c3a7ef2c4593de8e81de8e06bd05c8afaa256264e90c71cbbeccd6b8-runc.YuyEgE.mount: Deactivated successfully. Apr 16 01:04:40.145417 systemd[1]: Started cri-containerd-7c5e6b41c3a7ef2c4593de8e81de8e06bd05c8afaa256264e90c71cbbeccd6b8.scope - libcontainer container 7c5e6b41c3a7ef2c4593de8e81de8e06bd05c8afaa256264e90c71cbbeccd6b8. Apr 16 01:04:41.084492 kubelet[2221]: E0416 01:04:41.067481 2221 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice/cri-containerd-7c5e6b41c3a7ef2c4593de8e81de8e06bd05c8afaa256264e90c71cbbeccd6b8.scope\": RecentStats: unable to find data in memory cache]" Apr 16 01:04:41.646913 containerd[1456]: time="2026-04-16T01:04:41.639347105Z" level=info msg="StartContainer for \"7c5e6b41c3a7ef2c4593de8e81de8e06bd05c8afaa256264e90c71cbbeccd6b8\" returns successfully" Apr 16 01:04:41.937344 kubelet[2221]: E0416 01:04:41.931451 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:43.159956 kubelet[2221]: E0416 01:04:43.159528 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:45.519490 systemd[1]: Reloading requested from client PID 2584 ('systemctl') (unit session-5.scope)... Apr 16 01:04:45.520992 systemd[1]: Reloading... Apr 16 01:04:46.579635 zram_generator::config[2626]: No configuration found. 
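The sequence above is kubelet's CRI-level container restart: the shim for the old kube-controller-manager container disconnects, kubelet logs RemoveContainer for the dead container id, then issues CreateContainer into the same sandbox with the Attempt counter bumped from 0 to 1. A minimal sketch of what that request looks like with the published cri-api types; the sandbox id is copied from the log, and every field the log entry does not show is left out:

```go
package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// Sketch of the CRI CreateContainerRequest implied by the
// "CreateContainer within sandbox ... &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
// entry above. Kubelet increments Attempt each time it recreates the
// same container name inside one sandbox.
func main() {
	req := &runtimeapi.CreateContainerRequest{
		PodSandboxId: "198d814eec32e4abd81ef4c4bb921ba43f8be02293d94919d490aa4752fcf8be",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{
				Name:    "kube-controller-manager",
				Attempt: 1, // the failed instance was Attempt 0
			},
			// Image, command, mounts, etc. omitted in this sketch.
		},
	}
	fmt.Println(req.Config.Metadata)
}
```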
Apr 16 01:04:47.268037 kubelet[2221]: E0416 01:04:47.267621 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:48.050777 kubelet[2221]: E0416 01:04:48.040396 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:48.139046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:04:48.462320 kubelet[2221]: E0416 01:04:48.462189 2221 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:48.890236 systemd[1]: Reloading finished in 3367 ms. Apr 16 01:04:49.551196 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:04:49.762740 kubelet[2221]: I0416 01:04:49.570456 2221 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 01:04:49.841399 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 01:04:49.850150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:04:49.866677 systemd[1]: kubelet.service: Consumed 16.654s CPU time, 137.2M memory peak, 0B memory swap peak. Apr 16 01:04:50.037416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:04:52.401797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:04:52.490293 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 01:04:52.919226 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 01:04:52.919226 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 01:04:52.919226 kubelet[2668]: I0416 01:04:52.912085 2668 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 01:04:53.136679 kubelet[2668]: I0416 01:04:53.114944 2668 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 01:04:53.136679 kubelet[2668]: I0416 01:04:53.115065 2668 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 01:04:53.136679 kubelet[2668]: I0416 01:04:53.134695 2668 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 01:04:53.136679 kubelet[2668]: I0416 01:04:53.134875 2668 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
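The two watchdog_linux.go lines show kubelet probing for a systemd watchdog and finding none (no WatchdogSec= set on kubelet.service). The underlying protocol is sd_notify: ask systemd for the watchdog interval, and ping at some fraction of it if one is configured. A sketch of that check with the go-systemd bindings; this is the same protocol, not kubelet's own code:

```go
package main

import (
	"fmt"
	"time"

	"github.com/coreos/go-systemd/v22/daemon"
)

// Sketch of the check behind the "Systemd watchdog is not enabled"
// lines above: a zero interval from systemd means no watchdog, so no
// health pings are started.
func main() {
	interval, err := daemon.SdWatchdogEnabled(false)
	if err != nil || interval == 0 {
		fmt.Println("systemd watchdog is not enabled")
		return
	}
	for range time.Tick(interval / 2) { // ping at half the interval
		daemon.SdNotify(false, daemon.SdNotifyWatchdog) // sends "WATCHDOG=1"
	}
}
```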
Apr 16 01:04:53.136679 kubelet[2668]: I0416 01:04:53.135727 2668 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 01:04:53.165084 kubelet[2668]: I0416 01:04:53.164001 2668 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 01:04:53.189389 kubelet[2668]: I0416 01:04:53.181994 2668 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 01:04:53.193580 kubelet[2668]: E0416 01:04:53.193172 2668 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 01:04:53.193580 kubelet[2668]: I0416 01:04:53.193270 2668 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 16 01:04:53.266862 kubelet[2668]: I0416 01:04:53.266087 2668 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 16 01:04:53.288236 kubelet[2668]: I0416 01:04:53.277782 2668 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 01:04:53.288236 kubelet[2668]: I0416 01:04:53.278196 2668 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 01:04:53.288236 kubelet[2668]: I0416 01:04:53.278584 2668 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 01:04:53.288236 kubelet[2668]: I0416 01:04:53.278600 2668 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 01:04:53.288684 kubelet[2668]: I0416 01:04:53.278716 2668 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 01:04:53.288684 kubelet[2668]: I0416 01:04:53.279512 2668 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:04:53.288684 kubelet[2668]: I0416 01:04:53.279932 2668 kubelet.go:475] "Attempting to sync node 
with API server" Apr 16 01:04:53.288684 kubelet[2668]: I0416 01:04:53.279956 2668 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 01:04:53.288684 kubelet[2668]: I0416 01:04:53.279984 2668 kubelet.go:387] "Adding apiserver pod source" Apr 16 01:04:53.288684 kubelet[2668]: I0416 01:04:53.279996 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 01:04:53.288684 kubelet[2668]: I0416 01:04:53.284741 2668 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 01:04:53.328251 kubelet[2668]: I0416 01:04:53.321291 2668 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 01:04:53.328251 kubelet[2668]: I0416 01:04:53.327420 2668 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 01:04:53.380314 kubelet[2668]: I0416 01:04:53.367238 2668 server.go:1262] "Started kubelet" Apr 16 01:04:53.384871 kubelet[2668]: I0416 01:04:53.383291 2668 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 01:04:53.384871 kubelet[2668]: I0416 01:04:53.383442 2668 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 01:04:53.384871 kubelet[2668]: I0416 01:04:53.384239 2668 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 01:04:53.387721 kubelet[2668]: I0416 01:04:53.383370 2668 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 01:04:53.388368 kubelet[2668]: I0416 01:04:53.388056 2668 server.go:310] "Adding debug handlers to kubelet server" Apr 16 01:04:53.419724 kubelet[2668]: I0416 01:04:53.394362 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 01:04:53.454950 kubelet[2668]: I0416 01:04:53.431425 2668 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 01:04:53.454950 kubelet[2668]: I0416 01:04:53.449349 2668 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 01:04:53.454950 kubelet[2668]: I0416 01:04:53.449540 2668 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 01:04:53.454950 kubelet[2668]: I0416 01:04:53.449900 2668 reconciler.go:29] "Reconciler: start to sync state" Apr 16 01:04:53.463206 kubelet[2668]: I0416 01:04:53.455633 2668 factory.go:223] Registration of the systemd container factory successfully Apr 16 01:04:53.463206 kubelet[2668]: I0416 01:04:53.455759 2668 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 01:04:53.473915 kubelet[2668]: I0416 01:04:53.473856 2668 factory.go:223] Registration of the containerd container factory successfully Apr 16 01:04:53.489618 kubelet[2668]: E0416 01:04:53.473877 2668 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 01:04:53.767232 kubelet[2668]: I0416 01:04:53.766437 2668 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 01:04:53.792966 kubelet[2668]: I0416 01:04:53.792799 2668 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 01:04:53.808075 kubelet[2668]: I0416 01:04:53.793555 2668 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:04:53.808075 kubelet[2668]: I0416 01:04:53.793779 2668 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 01:04:53.808075 kubelet[2668]: I0416 01:04:53.793882 2668 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 01:04:53.808075 kubelet[2668]: I0416 01:04:53.793892 2668 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 01:04:53.808075 kubelet[2668]: I0416 01:04:53.793909 2668 policy_none.go:49] "None policy: Start" Apr 16 01:04:53.808075 kubelet[2668]: I0416 01:04:53.793924 2668 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 01:04:53.808075 kubelet[2668]: I0416 01:04:53.793937 2668 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 01:04:53.808075 kubelet[2668]: I0416 01:04:53.794045 2668 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 16 01:04:53.808075 kubelet[2668]: I0416 01:04:53.794054 2668 policy_none.go:47] "Start" Apr 16 01:04:53.924310 kubelet[2668]: I0416 01:04:53.923832 2668 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 16 01:04:53.924310 kubelet[2668]: I0416 01:04:53.924023 2668 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 01:04:53.924310 kubelet[2668]: I0416 01:04:53.924184 2668 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 01:04:53.926898 kubelet[2668]: E0416 01:04:53.924643 2668 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 01:04:54.035875 kubelet[2668]: E0416 01:04:54.030513 2668 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 01:04:54.159232 kubelet[2668]: E0416 01:04:54.158825 2668 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 01:04:54.184716 kubelet[2668]: I0416 01:04:54.182692 2668 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 01:04:54.184716 kubelet[2668]: I0416 01:04:54.182717 2668 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 01:04:54.188366 kubelet[2668]: I0416 01:04:54.186867 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 01:04:54.205865 kubelet[2668]: E0416 01:04:54.199679 2668 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 01:04:54.244692 kubelet[2668]: I0416 01:04:54.243937 2668 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 01:04:54.246866 kubelet[2668]: I0416 01:04:54.245772 2668 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:04:54.246866 kubelet[2668]: I0416 01:04:54.245972 2668 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 01:04:54.328204 kubelet[2668]: I0416 01:04:54.300599 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:04:54.328204 kubelet[2668]: I0416 01:04:54.300698 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:04:54.328204 kubelet[2668]: I0416 01:04:54.300919 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a1193a8d9d8acec200f820e661eff93-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a1193a8d9d8acec200f820e661eff93\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:04:54.328204 kubelet[2668]: I0416 01:04:54.300940 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a1193a8d9d8acec200f820e661eff93-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6a1193a8d9d8acec200f820e661eff93\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:04:54.328204 kubelet[2668]: I0416 01:04:54.300960 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:04:54.346163 kubelet[2668]: I0416 01:04:54.300978 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:04:54.346163 kubelet[2668]: I0416 01:04:54.300996 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:04:54.346163 kubelet[2668]: I0416 01:04:54.301013 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 16 01:04:54.346163 kubelet[2668]: I0416 01:04:54.301029 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a1193a8d9d8acec200f820e661eff93-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a1193a8d9d8acec200f820e661eff93\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:04:54.346163 kubelet[2668]: I0416 01:04:54.303236 2668 apiserver.go:52] "Watching apiserver" Apr 16 01:04:54.363493 kubelet[2668]: I0416 01:04:54.352018 2668 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:04:54.453862 kubelet[2668]: I0416 01:04:54.450974 2668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 01:04:54.524981 kubelet[2668]: E0416 01:04:54.522583 2668 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:04:54.524981 kubelet[2668]: E0416 01:04:54.523618 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:54.524981 kubelet[2668]: E0416 01:04:54.523848 2668 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 16 01:04:54.524981 kubelet[2668]: E0416 01:04:54.524193 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:54.526198 kubelet[2668]: E0416 01:04:54.526174 2668 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 16 01:04:54.533655 kubelet[2668]: E0416 01:04:54.532644 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:54.601265 kubelet[2668]: I0416 01:04:54.600187 2668 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 16 01:04:54.601607 kubelet[2668]: I0416 01:04:54.601558 2668 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 01:04:55.320855 kubelet[2668]: E0416 01:04:55.318976 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:55.320855 kubelet[2668]: E0416 01:04:55.319851 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:55.320855 kubelet[2668]: E0416 01:04:55.320066 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:56.350489 kubelet[2668]: E0416 01:04:56.344846 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:56.350489 kubelet[2668]: E0416 01:04:56.349426 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:57.359405 kubelet[2668]: E0416 01:04:57.357066 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:04:58.380855 kubelet[2668]: E0416 01:04:58.377816 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:05:00.190670 kubelet[2668]: E0416 01:05:00.186394 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:05:00.599925 kubelet[2668]: E0416 01:05:00.596707 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:05:05.190959 kubelet[2668]: E0416 01:05:05.189579 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:05:09.689792 sudo[1589]: pam_unix(sudo:session): session closed for user root Apr 16 01:05:09.708120 sshd[1585]: pam_unix(sshd:session): session closed for user core Apr 16 01:05:09.768635 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:36268.service: Deactivated successfully. Apr 16 01:05:09.873403 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 01:05:09.874711 systemd[1]: session-5.scope: Consumed 18.574s CPU time, 162.5M memory peak, 0B memory swap peak. Apr 16 01:05:09.879646 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Apr 16 01:05:09.916873 systemd-logind[1443]: Removed session 5. Apr 16 01:05:38.838653 kubelet[2668]: I0416 01:05:38.833364 2668 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 01:05:38.880830 containerd[1456]: time="2026-04-16T01:05:38.880387437Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 16 01:05:38.923843 kubelet[2668]: I0416 01:05:38.882086 2668 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 01:05:44.108594 kubelet[2668]: I0416 01:05:44.101579 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d164ee23-f351-49e0-bc2a-92ba568e20dd-kube-proxy\") pod \"kube-proxy-8xhmq\" (UID: \"d164ee23-f351-49e0-bc2a-92ba568e20dd\") " pod="kube-system/kube-proxy-8xhmq" Apr 16 01:05:44.108594 kubelet[2668]: I0416 01:05:44.101983 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d164ee23-f351-49e0-bc2a-92ba568e20dd-xtables-lock\") pod \"kube-proxy-8xhmq\" (UID: \"d164ee23-f351-49e0-bc2a-92ba568e20dd\") " pod="kube-system/kube-proxy-8xhmq" Apr 16 01:05:44.108594 kubelet[2668]: I0416 01:05:44.102013 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d164ee23-f351-49e0-bc2a-92ba568e20dd-lib-modules\") pod \"kube-proxy-8xhmq\" (UID: \"d164ee23-f351-49e0-bc2a-92ba568e20dd\") " pod="kube-system/kube-proxy-8xhmq" Apr 16 01:05:44.108594 kubelet[2668]: I0416 01:05:44.102046 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgmdd\" (UniqueName: \"kubernetes.io/projected/d164ee23-f351-49e0-bc2a-92ba568e20dd-kube-api-access-fgmdd\") pod \"kube-proxy-8xhmq\" (UID: \"d164ee23-f351-49e0-bc2a-92ba568e20dd\") " pod="kube-system/kube-proxy-8xhmq" Apr 16 01:05:44.684929 systemd[1]: Created slice kubepods-besteffort-podd164ee23_f351_49e0_bc2a_92ba568e20dd.slice - libcontainer container kubepods-besteffort-podd164ee23_f351_49e0_bc2a_92ba568e20dd.slice. 
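The slice systemd just created, kubepods-besteffort-podd164ee23_f351_49e0_bc2a_92ba568e20dd.slice, is derived from the kube-proxy pod UID in the volume entries above: with the systemd cgroup driver (CgroupDriver:systemd in the NodeConfig dump earlier), kubelet joins the QoS parent and the pod UID, swapping the UID's dashes for underscores because a dash is systemd's slice-hierarchy separator. A sketch of the mapping, which reproduces the logged name exactly:

```go
package main

import (
	"fmt"
	"strings"
)

// Derives the systemd slice name for a pod cgroup: "<qos-parent>-pod<uid>"
// with "-" in the UID replaced by "_", since "-" separates slice levels.
func podSlice(qosParent, uid string) string {
	return qosParent + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	fmt.Println(podSlice("kubepods-besteffort", "d164ee23-f351-49e0-bc2a-92ba568e20dd"))
	// kubepods-besteffort-podd164ee23_f351_49e0_bc2a_92ba568e20dd.slice
}
```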
Apr 16 01:05:45.453278 kubelet[2668]: I0416 01:05:45.452899 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/193fc8ea-4ced-4368-bc11-e247cc97226c-run\") pod \"kube-flannel-ds-7nmbx\" (UID: \"193fc8ea-4ced-4368-bc11-e247cc97226c\") " pod="kube-flannel/kube-flannel-ds-7nmbx" Apr 16 01:05:45.453278 kubelet[2668]: I0416 01:05:45.453235 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/193fc8ea-4ced-4368-bc11-e247cc97226c-cni\") pod \"kube-flannel-ds-7nmbx\" (UID: \"193fc8ea-4ced-4368-bc11-e247cc97226c\") " pod="kube-flannel/kube-flannel-ds-7nmbx" Apr 16 01:05:45.453278 kubelet[2668]: I0416 01:05:45.453271 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/193fc8ea-4ced-4368-bc11-e247cc97226c-cni-plugin\") pod \"kube-flannel-ds-7nmbx\" (UID: \"193fc8ea-4ced-4368-bc11-e247cc97226c\") " pod="kube-flannel/kube-flannel-ds-7nmbx" Apr 16 01:05:45.453278 kubelet[2668]: I0416 01:05:45.453292 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/193fc8ea-4ced-4368-bc11-e247cc97226c-flannel-cfg\") pod \"kube-flannel-ds-7nmbx\" (UID: \"193fc8ea-4ced-4368-bc11-e247cc97226c\") " pod="kube-flannel/kube-flannel-ds-7nmbx" Apr 16 01:05:45.494410 kubelet[2668]: I0416 01:05:45.453312 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/193fc8ea-4ced-4368-bc11-e247cc97226c-xtables-lock\") pod \"kube-flannel-ds-7nmbx\" (UID: \"193fc8ea-4ced-4368-bc11-e247cc97226c\") " pod="kube-flannel/kube-flannel-ds-7nmbx" Apr 16 01:05:45.494410 kubelet[2668]: I0416 01:05:45.453416 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l97cj\" (UniqueName: \"kubernetes.io/projected/193fc8ea-4ced-4368-bc11-e247cc97226c-kube-api-access-l97cj\") pod \"kube-flannel-ds-7nmbx\" (UID: \"193fc8ea-4ced-4368-bc11-e247cc97226c\") " pod="kube-flannel/kube-flannel-ds-7nmbx" Apr 16 01:05:45.611919 systemd[1]: Created slice kubepods-burstable-pod193fc8ea_4ced_4368_bc11_e247cc97226c.slice - libcontainer container kubepods-burstable-pod193fc8ea_4ced_4368_bc11_e247cc97226c.slice. 
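kube-proxy-8xhmq went into a besteffort slice while kube-flannel-ds-7nmbx gets a burstable one; the difference is the pods' resource specs. Roughly: no requests or limits anywhere gives BestEffort, requests equal to limits for every resource gives Guaranteed, and anything in between is Burstable. A simplified sketch of that classification; the sample values are illustrative, not taken from these pods' manifests:

```go
package main

import "fmt"

// Simplified QoS classification, mirroring why one pod above landed in a
// kubepods-besteffort-*.slice and the other in a kubepods-burstable one.
// Real kubelet logic walks every container and resource name; this
// sketch only looks at whole-pod cpu/memory requests and limits.
type resources struct{ reqCPU, limCPU, reqMem, limMem string }

func qosClass(r resources) string {
	if r.reqCPU == "" && r.limCPU == "" && r.reqMem == "" && r.limMem == "" {
		return "BestEffort" // nothing requested, nothing limited
	}
	if r.reqCPU != "" && r.reqCPU == r.limCPU && r.reqMem != "" && r.reqMem == r.limMem {
		return "Guaranteed" // requests equal limits for every resource
	}
	return "Burstable" // anything in between
}

func main() {
	fmt.Println(qosClass(resources{}))                               // BestEffort
	fmt.Println(qosClass(resources{reqCPU: "100m", reqMem: "50Mi"})) // Burstable
	fmt.Println(qosClass(resources{"250m", "250m", "64Mi", "64Mi"})) // Guaranteed
}
```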
Apr 16 01:05:45.833464 kubelet[2668]: E0416 01:05:45.816820 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:05:45.859475 containerd[1456]: time="2026-04-16T01:05:45.858476227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8xhmq,Uid:d164ee23-f351-49e0-bc2a-92ba568e20dd,Namespace:kube-system,Attempt:0,}" Apr 16 01:05:47.128837 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1040200633 wd_nsec: 1040200689 Apr 16 01:05:47.355244 kubelet[2668]: E0416 01:05:47.352043 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:05:47.418050 containerd[1456]: time="2026-04-16T01:05:47.383913793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7nmbx,Uid:193fc8ea-4ced-4368-bc11-e247cc97226c,Namespace:kube-flannel,Attempt:0,}" Apr 16 01:05:48.000471 containerd[1456]: time="2026-04-16T01:05:47.925791040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:05:48.000471 containerd[1456]: time="2026-04-16T01:05:47.925994767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:05:48.000471 containerd[1456]: time="2026-04-16T01:05:47.926015262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:05:48.000471 containerd[1456]: time="2026-04-16T01:05:47.926663020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:05:48.726707 containerd[1456]: time="2026-04-16T01:05:48.717332099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:05:48.726707 containerd[1456]: time="2026-04-16T01:05:48.717492147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:05:48.726707 containerd[1456]: time="2026-04-16T01:05:48.717545276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:05:48.726707 containerd[1456]: time="2026-04-16T01:05:48.717683848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:05:49.293639 systemd[1]: Started cri-containerd-027e3d15bb1ba6178cc885621854b60ac08887aa4a2077970d593eec7e56964b.scope - libcontainer container 027e3d15bb1ba6178cc885621854b60ac08887aa4a2077970d593eec7e56964b. Apr 16 01:05:49.521039 systemd[1]: run-containerd-runc-k8s.io-7c5eccbea31b714573445b46e7f4fbad5295a9f71d609c3f0c9b28b40d313691-runc.eY573J.mount: Deactivated successfully. Apr 16 01:05:49.663827 systemd[1]: Started cri-containerd-7c5eccbea31b714573445b46e7f4fbad5295a9f71d609c3f0c9b28b40d313691.scope - libcontainer container 7c5eccbea31b714573445b46e7f4fbad5295a9f71d609c3f0c9b28b40d313691. 
Apr 16 01:05:50.596242 containerd[1456]: time="2026-04-16T01:05:50.595951380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8xhmq,Uid:d164ee23-f351-49e0-bc2a-92ba568e20dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"027e3d15bb1ba6178cc885621854b60ac08887aa4a2077970d593eec7e56964b\"" Apr 16 01:05:50.667402 kubelet[2668]: E0416 01:05:50.666488 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:05:50.908957 containerd[1456]: time="2026-04-16T01:05:50.880285751Z" level=info msg="CreateContainer within sandbox \"027e3d15bb1ba6178cc885621854b60ac08887aa4a2077970d593eec7e56964b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 01:05:51.034573 containerd[1456]: time="2026-04-16T01:05:51.034060722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7nmbx,Uid:193fc8ea-4ced-4368-bc11-e247cc97226c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"7c5eccbea31b714573445b46e7f4fbad5295a9f71d609c3f0c9b28b40d313691\"" Apr 16 01:05:51.040729 kubelet[2668]: E0416 01:05:51.040659 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:05:51.067482 containerd[1456]: time="2026-04-16T01:05:51.063850067Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Apr 16 01:05:51.147708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692574203.mount: Deactivated successfully. Apr 16 01:05:51.174452 containerd[1456]: time="2026-04-16T01:05:51.166012534Z" level=info msg="CreateContainer within sandbox \"027e3d15bb1ba6178cc885621854b60ac08887aa4a2077970d593eec7e56964b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"85e396f7b490dc7967da27433f7155ef3fe493ce0838fa9803d757e7978c9646\"" Apr 16 01:05:51.212543 containerd[1456]: time="2026-04-16T01:05:51.212238394Z" level=info msg="StartContainer for \"85e396f7b490dc7967da27433f7155ef3fe493ce0838fa9803d757e7978c9646\"" Apr 16 01:05:51.814855 systemd[1]: Started cri-containerd-85e396f7b490dc7967da27433f7155ef3fe493ce0838fa9803d757e7978c9646.scope - libcontainer container 85e396f7b490dc7967da27433f7155ef3fe493ce0838fa9803d757e7978c9646. Apr 16 01:05:52.639906 containerd[1456]: time="2026-04-16T01:05:52.612261484Z" level=info msg="StartContainer for \"85e396f7b490dc7967da27433f7155ef3fe493ce0838fa9803d757e7978c9646\" returns successfully" Apr 16 01:05:53.155225 kubelet[2668]: E0416 01:05:53.136068 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:05:54.551435 kubelet[2668]: E0416 01:05:54.551245 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:01.312206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618788788.mount: Deactivated successfully. 
Apr 16 01:06:04.510148 containerd[1456]: time="2026-04-16T01:06:04.509690242Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:04.529955 containerd[1456]: time="2026-04-16T01:06:04.529568924Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Apr 16 01:06:04.548718 containerd[1456]: time="2026-04-16T01:06:04.540805617Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:04.945864 containerd[1456]: time="2026-04-16T01:06:04.943139883Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:04.982456 containerd[1456]: time="2026-04-16T01:06:04.981036337Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 13.91134227s" Apr 16 01:06:04.982456 containerd[1456]: time="2026-04-16T01:06:04.981193960Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 16 01:06:05.109568 containerd[1456]: time="2026-04-16T01:06:05.108805438Z" level=info msg="CreateContainer within sandbox \"7c5eccbea31b714573445b46e7f4fbad5295a9f71d609c3f0c9b28b40d313691\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Apr 16 01:06:05.544067 containerd[1456]: time="2026-04-16T01:06:05.543359070Z" level=info msg="CreateContainer within sandbox \"7c5eccbea31b714573445b46e7f4fbad5295a9f71d609c3f0c9b28b40d313691\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"879e4bb72d2f7259d36da35884907c8ef254e754a8292934bc17666df75c4d90\"" Apr 16 01:06:05.560475 containerd[1456]: time="2026-04-16T01:06:05.559715407Z" level=info msg="StartContainer for \"879e4bb72d2f7259d36da35884907c8ef254e754a8292934bc17666df75c4d90\"" Apr 16 01:06:05.571341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953327716.mount: Deactivated successfully. Apr 16 01:06:06.702516 systemd[1]: Started cri-containerd-879e4bb72d2f7259d36da35884907c8ef254e754a8292934bc17666df75c4d90.scope - libcontainer container 879e4bb72d2f7259d36da35884907c8ef254e754a8292934bc17666df75c4d90. Apr 16 01:06:07.788735 systemd[1]: cri-containerd-879e4bb72d2f7259d36da35884907c8ef254e754a8292934bc17666df75c4d90.scope: Deactivated successfully. 
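The pull record above gives enough to estimate transfer speed: 4856838 bytes read over 13.91134227s. A one-liner to make the rate explicit; both figures are copied from the log, and "bytes read" is the compressed transfer, not the unpacked image size:

```go
package main

import "fmt"

// Rate implied by the flannel-cni-plugin pull entries above.
func main() {
	const bytesRead, seconds = 4856838.0, 13.91134227
	fmt.Printf("%.0f B/s (~%.0f KiB/s)\n", bytesRead/seconds, bytesRead/seconds/1024)
	// 349128 B/s (~341 KiB/s)
}
```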
Apr 16 01:06:07.911278 containerd[1456]: time="2026-04-16T01:06:07.901850248Z" level=info msg="StartContainer for \"879e4bb72d2f7259d36da35884907c8ef254e754a8292934bc17666df75c4d90\" returns successfully" Apr 16 01:06:09.651546 kubelet[2668]: E0416 01:06:09.644454 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:10.283388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-879e4bb72d2f7259d36da35884907c8ef254e754a8292934bc17666df75c4d90-rootfs.mount: Deactivated successfully. Apr 16 01:06:10.492878 containerd[1456]: time="2026-04-16T01:06:10.487597535Z" level=info msg="shim disconnected" id=879e4bb72d2f7259d36da35884907c8ef254e754a8292934bc17666df75c4d90 namespace=k8s.io Apr 16 01:06:10.492878 containerd[1456]: time="2026-04-16T01:06:10.487985470Z" level=warning msg="cleaning up after shim disconnected" id=879e4bb72d2f7259d36da35884907c8ef254e754a8292934bc17666df75c4d90 namespace=k8s.io Apr 16 01:06:10.492878 containerd[1456]: time="2026-04-16T01:06:10.488010213Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:06:11.019407 kubelet[2668]: E0416 01:06:11.018921 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:11.220131 kubelet[2668]: E0416 01:06:11.209730 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:11.599514 kubelet[2668]: I0416 01:06:11.567478 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8xhmq" podStartSLOduration=28.567456817 podStartE2EDuration="28.567456817s" podCreationTimestamp="2026-04-16 01:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:05:55.123289379 +0000 UTC m=+62.618896573" watchObservedRunningTime="2026-04-16 01:06:11.567456817 +0000 UTC m=+79.063063989" Apr 16 01:06:15.262940 kubelet[2668]: E0416 01:06:15.258669 2668 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.118s" Apr 16 01:06:17.324469 kubelet[2668]: E0416 01:06:17.320697 2668 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.02s" Apr 16 01:06:19.317709 kubelet[2668]: E0416 01:06:19.317206 2668 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.996s" Apr 16 01:06:19.663622 kubelet[2668]: E0416 01:06:19.662253 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:19.677469 kubelet[2668]: E0416 01:06:19.663288 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:19.911753 containerd[1456]: time="2026-04-16T01:06:19.911420562Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 16 01:06:22.029644 kubelet[2668]: E0416 01:06:22.025161 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:32.807732 kubelet[2668]: E0416 01:06:32.807556 2668 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.325s" Apr 16 01:06:34.119797 kubelet[2668]: E0416 01:06:34.118574 2668 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.306s" Apr 16 01:06:53.087657 update_engine[1445]: I20260416 01:06:52.987649 1445 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 16 01:06:53.087657 update_engine[1445]: I20260416 01:06:53.066740 1445 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 16 01:06:53.138924 update_engine[1445]: I20260416 01:06:53.111460 1445 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 16 01:06:53.138924 update_engine[1445]: I20260416 01:06:53.135859 1445 omaha_request_params.cc:62] Current group set to lts Apr 16 01:06:53.138924 update_engine[1445]: I20260416 01:06:53.137195 1445 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 16 01:06:53.138924 update_engine[1445]: I20260416 01:06:53.137230 1445 update_attempter.cc:643] Scheduling an action processor start. Apr 16 01:06:53.138924 update_engine[1445]: I20260416 01:06:53.137255 1445 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 01:06:53.138924 update_engine[1445]: I20260416 01:06:53.137377 1445 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 16 01:06:53.138924 update_engine[1445]: I20260416 01:06:53.137943 1445 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 01:06:53.138924 update_engine[1445]: I20260416 01:06:53.137962 1445 omaha_request_action.cc:272] Request: Apr 16 01:06:53.138924 update_engine[1445]: Apr 16 01:06:53.138924 update_engine[1445]: Apr 16 01:06:53.138924 update_engine[1445]: Apr 16 01:06:53.138924 update_engine[1445]: Apr 16 01:06:53.138924 update_engine[1445]: Apr 16 01:06:53.138924 update_engine[1445]: Apr 16 01:06:53.138924 update_engine[1445]: Apr 16 01:06:53.138924 update_engine[1445]: Apr 16 01:06:53.138924 update_engine[1445]: I20260416 01:06:53.137970 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:06:53.148123 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 16 01:06:53.228321 update_engine[1445]: I20260416 01:06:53.228129 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:06:53.229938 update_engine[1445]: I20260416 01:06:53.229859 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
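update_engine is posting its Omaha check to the literal hostname "disabled", consistent with an update server deliberately switched off in the machine's update configuration, so every fetch fails at name resolution and the fetcher re-arms its 1-second timeout source, as the retry entries that follow show. An illustrative sketch of that failure loop, not update_engine's code; whether "disabled" resolves depends on the local resolver:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Illustration of the libcurl_http_fetcher behaviour in the log: the
// request host is the literal string "disabled", name resolution fails,
// and the fetcher retries on a 1s timer ("No HTTP response, retry N").
func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		if _, err := net.LookupHost("disabled"); err != nil {
			fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
			time.Sleep(1 * time.Second)
		}
	}
}
```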
Apr 16 01:06:53.291865 update_engine[1445]: E20260416 01:06:53.258821 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:06:53.365025 update_engine[1445]: I20260416 01:06:53.265023 1445 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 16 01:06:53.783922 kubelet[2668]: E0416 01:06:53.778745 2668 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Apr 16 01:06:54.228737 kubelet[2668]: E0416 01:06:54.227532 2668 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 01:06:59.294975 kubelet[2668]: E0416 01:06:59.288497 2668 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 01:06:59.993420 kubelet[2668]: E0416 01:06:59.990455 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:03.949742 update_engine[1445]: I20260416 01:07:03.938253 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:07:04.294741 update_engine[1445]: I20260416 01:07:04.115528 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:07:04.294741 update_engine[1445]: I20260416 01:07:04.116187 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 01:07:04.294741 update_engine[1445]: E20260416 01:07:04.168532 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:07:04.294741 update_engine[1445]: I20260416 01:07:04.170951 1445 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 16 01:07:04.363038 kubelet[2668]: E0416 01:07:04.361906 2668 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 01:07:06.434194 containerd[1456]: time="2026-04-16T01:07:06.427582068Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:07:06.434194 containerd[1456]: time="2026-04-16T01:07:06.434823991Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Apr 16 01:07:06.505719 containerd[1456]: time="2026-04-16T01:07:06.500535324Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:07:06.702245 containerd[1456]: time="2026-04-16T01:07:06.696900628Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:07:06.994200 containerd[1456]: time="2026-04-16T01:07:06.970754908Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 47.05694537s" Apr 16 01:07:06.994200 containerd[1456]: 
time="2026-04-16T01:07:06.971083194Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Apr 16 01:07:07.803873 containerd[1456]: time="2026-04-16T01:07:07.803552525Z" level=info msg="CreateContainer within sandbox \"7c5eccbea31b714573445b46e7f4fbad5295a9f71d609c3f0c9b28b40d313691\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 16 01:07:09.038384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3340074165.mount: Deactivated successfully. Apr 16 01:07:09.396867 containerd[1456]: time="2026-04-16T01:07:09.394504145Z" level=info msg="CreateContainer within sandbox \"7c5eccbea31b714573445b46e7f4fbad5295a9f71d609c3f0c9b28b40d313691\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469\"" Apr 16 01:07:09.469402 containerd[1456]: time="2026-04-16T01:07:09.401776530Z" level=info msg="StartContainer for \"84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469\"" Apr 16 01:07:09.472524 kubelet[2668]: E0416 01:07:09.400959 2668 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 01:07:11.251932 systemd[1]: run-containerd-runc-k8s.io-84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469-runc.YxKidW.mount: Deactivated successfully. Apr 16 01:07:11.737165 systemd[1]: Started cri-containerd-84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469.scope - libcontainer container 84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469. Apr 16 01:07:13.877314 containerd[1456]: time="2026-04-16T01:07:13.865280776Z" level=error msg="get state for 84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469" error="context deadline exceeded: unknown" Apr 16 01:07:13.877314 containerd[1456]: time="2026-04-16T01:07:13.867678723Z" level=warning msg="unknown status" status=0 Apr 16 01:07:14.512396 kubelet[2668]: E0416 01:07:14.511891 2668 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 01:07:14.896853 update_engine[1445]: I20260416 01:07:14.894887 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:07:14.896853 update_engine[1445]: I20260416 01:07:14.896556 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:07:14.896853 update_engine[1445]: I20260416 01:07:14.896821 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 01:07:14.958419 containerd[1456]: time="2026-04-16T01:07:14.957925699Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 01:07:15.068632 update_engine[1445]: E20260416 01:07:14.961002 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:07:15.068632 update_engine[1445]: I20260416 01:07:14.961921 1445 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 16 01:07:15.750277 systemd[1]: cri-containerd-84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469.scope: Deactivated successfully. 
Apr 16 01:07:15.791708 containerd[1456]: time="2026-04-16T01:07:15.770779287Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod193fc8ea_4ced_4368_bc11_e247cc97226c.slice/cri-containerd-84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469.scope/memory.events\": no such file or directory" Apr 16 01:07:15.849907 containerd[1456]: time="2026-04-16T01:07:15.849425980Z" level=info msg="StartContainer for \"84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469\" returns successfully" Apr 16 01:07:17.456072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469-rootfs.mount: Deactivated successfully. Apr 16 01:07:17.543320 kubelet[2668]: E0416 01:07:17.542971 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:17.833561 containerd[1456]: time="2026-04-16T01:07:17.817058356Z" level=info msg="shim disconnected" id=84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469 namespace=k8s.io Apr 16 01:07:17.833561 containerd[1456]: time="2026-04-16T01:07:17.831756434Z" level=warning msg="cleaning up after shim disconnected" id=84872911501914f0711a632ff0f14b75abfd1753dc1139aba0f4d05e9fc22469 namespace=k8s.io Apr 16 01:07:17.866651 containerd[1456]: time="2026-04-16T01:07:17.850649774Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:07:20.843589 kubelet[2668]: E0416 01:07:20.837319 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:21.851636 containerd[1456]: time="2026-04-16T01:07:21.837476935Z" level=info msg="CreateContainer within sandbox \"7c5eccbea31b714573445b46e7f4fbad5295a9f71d609c3f0c9b28b40d313691\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Apr 16 01:07:22.973554 kubelet[2668]: E0416 01:07:22.968964 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:23.835612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765723192.mount: Deactivated successfully. Apr 16 01:07:24.039988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2400462789.mount: Deactivated successfully. 
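The install-cni container that just exited is kube-flannel's init step: it drops a CNI config into /etc/cni/net.d (conventionally 10-flannel.conflist) so the runtime knows to invoke the flannel plugin. A hedged reconstruction of that file based on the stock kube-flannel manifest; the log later confirms the name "cbr0", cniVersion 0.3.1, hairpinMode, and isDefaultGateway, but the file itself and the portmap entry are not shown:

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }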
Apr 16 01:07:24.423881 containerd[1456]: time="2026-04-16T01:07:24.422038210Z" level=info msg="CreateContainer within sandbox \"7c5eccbea31b714573445b46e7f4fbad5295a9f71d609c3f0c9b28b40d313691\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"88893eef440f10ff20d59e444751480f1265eb16a2d36a5385dee4b6c96227ac\"" Apr 16 01:07:24.633769 containerd[1456]: time="2026-04-16T01:07:24.629537467Z" level=info msg="StartContainer for \"88893eef440f10ff20d59e444751480f1265eb16a2d36a5385dee4b6c96227ac\"" Apr 16 01:07:24.940191 update_engine[1445]: I20260416 01:07:24.931393 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:07:24.975382 kubelet[2668]: E0416 01:07:24.974532 2668 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.038s" Apr 16 01:07:25.035531 update_engine[1445]: I20260416 01:07:24.975397 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:07:25.035531 update_engine[1445]: I20260416 01:07:24.985034 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 01:07:25.167066 update_engine[1445]: E20260416 01:07:25.164765 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:07:25.299008 update_engine[1445]: I20260416 01:07:25.174710 1445 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 01:07:25.299008 update_engine[1445]: I20260416 01:07:25.180254 1445 omaha_request_action.cc:617] Omaha request response: Apr 16 01:07:25.299008 update_engine[1445]: E20260416 01:07:25.197342 1445 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 16 01:07:25.299008 update_engine[1445]: I20260416 01:07:25.210716 1445 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 16 01:07:25.299008 update_engine[1445]: I20260416 01:07:25.213064 1445 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 01:07:25.299008 update_engine[1445]: I20260416 01:07:25.225546 1445 update_attempter.cc:306] Processing Done. Apr 16 01:07:25.299008 update_engine[1445]: E20260416 01:07:25.227924 1445 update_attempter.cc:619] Update failed. Apr 16 01:07:25.299008 update_engine[1445]: I20260416 01:07:25.227949 1445 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 16 01:07:25.299008 update_engine[1445]: I20260416 01:07:25.227957 1445 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 16 01:07:25.299008 update_engine[1445]: I20260416 01:07:25.227964 1445 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 16 01:07:25.299008 update_engine[1445]: I20260416 01:07:25.249799 1445 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 01:07:25.299008 update_engine[1445]: I20260416 01:07:25.250861 1445 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 01:07:25.299008 update_engine[1445]: I20260416 01:07:25.250880 1445 omaha_request_action.cc:272] Request: Apr 16 01:07:25.299008 update_engine[1445]: [empty continuation entries: the Omaha request XML body was stripped from this capture] Apr 16 01:07:25.585296 update_engine[1445]: I20260416 01:07:25.250892 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:07:25.585296 update_engine[1445]: I20260416 01:07:25.432791 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:07:25.585296 update_engine[1445]: I20260416 01:07:25.458799 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 01:07:25.585296 update_engine[1445]: E20260416 01:07:25.508183 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:07:25.585296 update_engine[1445]: I20260416 01:07:25.509294 1445 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 01:07:25.585296 update_engine[1445]: I20260416 01:07:25.509321 1445 omaha_request_action.cc:617] Omaha request response: Apr 16 01:07:25.585296 update_engine[1445]: I20260416 01:07:25.509332 1445 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 01:07:25.585296 update_engine[1445]: I20260416 01:07:25.509343 1445 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 01:07:25.585296 update_engine[1445]: I20260416 01:07:25.509349 1445 update_attempter.cc:306] Processing Done. Apr 16 01:07:25.585296 update_engine[1445]: I20260416 01:07:25.509358 1445 update_attempter.cc:310] Error event sent. Apr 16 01:07:25.585296 update_engine[1445]: I20260416 01:07:25.509374 1445 update_check_scheduler.cc:74] Next update check in 45m33s Apr 16 01:07:25.713885 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 16 01:07:25.713885 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 16 01:07:27.994262 systemd[1]: Started cri-containerd-88893eef440f10ff20d59e444751480f1265eb16a2d36a5385dee4b6c96227ac.scope - libcontainer container 88893eef440f10ff20d59e444751480f1265eb16a2d36a5385dee4b6c96227ac.
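This completes one full Omaha cycle: the request is posted to the disabled endpoint, the transfer fails, the error event is reported (mirrored by locksmithd's UPDATE_STATUS_REPORTING_ERROR_EVENT line), and the attempter goes idle until the next check in 45m33s. The same state machine can be queried directly on a Flatcar host; a sketch, with the expected fields taken from the locksmithd entries above:

    update_engine_client -status
    # LAST_CHECKED_TIME=0
    # PROGRESS=0
    # CURRENT_OP=UPDATE_STATUS_IDLE
    # NEW_VERSION=0.0.0
    # NEW_SIZE=0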
Apr 16 01:07:29.162692 kubelet[2668]: E0416 01:07:29.104957 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:30.011631 containerd[1456]: time="2026-04-16T01:07:30.002316450Z" level=error msg="get state for 88893eef440f10ff20d59e444751480f1265eb16a2d36a5385dee4b6c96227ac" error="context deadline exceeded: unknown" Apr 16 01:07:30.011631 containerd[1456]: time="2026-04-16T01:07:30.008827208Z" level=warning msg="unknown status" status=0 Apr 16 01:07:30.039959 kubelet[2668]: E0416 01:07:30.039143 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:30.756376 containerd[1456]: time="2026-04-16T01:07:30.753181783Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 01:07:31.413022 kubelet[2668]: I0416 01:07:31.412311 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzk56\" (UniqueName: \"kubernetes.io/projected/46bb4d9e-a0a8-46f8-b332-ff7d26ef5e1e-kube-api-access-nzk56\") pod \"coredns-66bc5c9577-d8stg\" (UID: \"46bb4d9e-a0a8-46f8-b332-ff7d26ef5e1e\") " pod="kube-system/coredns-66bc5c9577-d8stg" Apr 16 01:07:31.413022 kubelet[2668]: I0416 01:07:31.412547 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d57b8307-c45a-4a1f-9631-c6266e5f824e-config-volume\") pod \"coredns-66bc5c9577-ss9g6\" (UID: \"d57b8307-c45a-4a1f-9631-c6266e5f824e\") " pod="kube-system/coredns-66bc5c9577-ss9g6" Apr 16 01:07:31.413022 kubelet[2668]: I0416 01:07:31.412651 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46bb4d9e-a0a8-46f8-b332-ff7d26ef5e1e-config-volume\") pod \"coredns-66bc5c9577-d8stg\" (UID: \"46bb4d9e-a0a8-46f8-b332-ff7d26ef5e1e\") " pod="kube-system/coredns-66bc5c9577-d8stg" Apr 16 01:07:31.413022 kubelet[2668]: I0416 01:07:31.412744 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgf5r\" (UniqueName: \"kubernetes.io/projected/d57b8307-c45a-4a1f-9631-c6266e5f824e-kube-api-access-kgf5r\") pod \"coredns-66bc5c9577-ss9g6\" (UID: \"d57b8307-c45a-4a1f-9631-c6266e5f824e\") " pod="kube-system/coredns-66bc5c9577-ss9g6" Apr 16 01:07:31.846381 systemd[1]: Created slice kubepods-burstable-podd57b8307_c45a_4a1f_9631_c6266e5f824e.slice - libcontainer container kubepods-burstable-podd57b8307_c45a_4a1f_9631_c6266e5f824e.slice. Apr 16 01:07:32.382213 systemd[1]: Created slice kubepods-burstable-pod46bb4d9e_a0a8_46f8_b332_ff7d26ef5e1e.slice - libcontainer container kubepods-burstable-pod46bb4d9e_a0a8_46f8_b332_ff7d26ef5e1e.slice. 
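The four VerifyControllerAttachedVolume entries show each coredns pod attaching two volumes: a config-volume backed by a ConfigMap and a projected service-account token (kube-api-access-*). A hedged sketch of the pod-spec fragment they imply; the volume names and pod UIDs come from the log, while the ConfigMap name and token path are the conventional defaults:

    volumes:
      - name: config-volume
        configMap:
          name: coredns            # assumed; only the volume name is logged
      - name: kube-api-access-nzk56
        projected:
          sources:
            - serviceAccountToken:
                path: token        # conventional default

The kubepods-burstable-pod*.slice units created right afterwards are kubelet's systemd cgroup driver materializing one cgroup per pod in the Burstable QoS class; the \x2d sequences in such unit names are systemd's escaping of literal hyphens.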
Apr 16 01:07:32.560303 containerd[1456]: time="2026-04-16T01:07:32.556321538Z" level=info msg="StartContainer for \"88893eef440f10ff20d59e444751480f1265eb16a2d36a5385dee4b6c96227ac\" returns successfully" Apr 16 01:07:33.764626 kubelet[2668]: E0416 01:07:33.764226 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:33.826885 containerd[1456]: time="2026-04-16T01:07:33.826507708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ss9g6,Uid:d57b8307-c45a-4a1f-9631-c6266e5f824e,Namespace:kube-system,Attempt:0,}" Apr 16 01:07:34.080935 kubelet[2668]: E0416 01:07:34.061664 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:34.121329 containerd[1456]: time="2026-04-16T01:07:34.116705204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d8stg,Uid:46bb4d9e-a0a8-46f8-b332-ff7d26ef5e1e,Namespace:kube-system,Attempt:0,}" Apr 16 01:07:34.661704 kubelet[2668]: E0416 01:07:34.661054 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:35.931953 systemd[1]: run-netns-cni\x2db4bdce48\x2d61f6\x2d9c78\x2d392c\x2d952ad414e7f6.mount: Deactivated successfully. Apr 16 01:07:35.962795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c127b3113749f5447b40bff97cc2e26dca6dff5081de4c66db1af1c4a70954d-shm.mount: Deactivated successfully. Apr 16 01:07:36.369832 containerd[1456]: time="2026-04-16T01:07:36.274867534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ss9g6,Uid:d57b8307-c45a-4a1f-9631-c6266e5f824e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c127b3113749f5447b40bff97cc2e26dca6dff5081de4c66db1af1c4a70954d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 16 01:07:36.386255 kubelet[2668]: E0416 01:07:36.385983 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c127b3113749f5447b40bff97cc2e26dca6dff5081de4c66db1af1c4a70954d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 16 01:07:36.452822 kubelet[2668]: E0416 01:07:36.451566 2668 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c127b3113749f5447b40bff97cc2e26dca6dff5081de4c66db1af1c4a70954d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-ss9g6" Apr 16 01:07:36.452822 kubelet[2668]: E0416 01:07:36.451621 2668 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c127b3113749f5447b40bff97cc2e26dca6dff5081de4c66db1af1c4a70954d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-ss9g6" Apr 16 01:07:36.452822 kubelet[2668]: E0416 01:07:36.451830 2668 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ss9g6_kube-system(d57b8307-c45a-4a1f-9631-c6266e5f824e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ss9g6_kube-system(d57b8307-c45a-4a1f-9631-c6266e5f824e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c127b3113749f5447b40bff97cc2e26dca6dff5081de4c66db1af1c4a70954d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-ss9g6" podUID="d57b8307-c45a-4a1f-9631-c6266e5f824e" Apr 16 01:07:38.760184 systemd[1]: run-netns-cni\x2dfebf96f9\x2db7e3\x2d01c9\x2db70b\x2da53b456b9b8e.mount: Deactivated successfully. Apr 16 01:07:38.857337 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4747302ea5f21389583aaf7e27894d04b41ade2784dbf57a2ad73d6b267cebf-shm.mount: Deactivated successfully. Apr 16 01:07:38.902967 containerd[1456]: time="2026-04-16T01:07:38.900641333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d8stg,Uid:46bb4d9e-a0a8-46f8-b332-ff7d26ef5e1e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4747302ea5f21389583aaf7e27894d04b41ade2784dbf57a2ad73d6b267cebf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 16 01:07:38.974647 kubelet[2668]: E0416 01:07:38.963961 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4747302ea5f21389583aaf7e27894d04b41ade2784dbf57a2ad73d6b267cebf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 16 01:07:38.994186 kubelet[2668]: E0416 01:07:38.970472 2668 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4747302ea5f21389583aaf7e27894d04b41ade2784dbf57a2ad73d6b267cebf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-d8stg" Apr 16 01:07:39.002467 kubelet[2668]: E0416 01:07:38.994913 2668 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4747302ea5f21389583aaf7e27894d04b41ade2784dbf57a2ad73d6b267cebf\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-d8stg" Apr 16 01:07:39.023913 kubelet[2668]: E0416 01:07:39.017581 2668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-d8stg_kube-system(46bb4d9e-a0a8-46f8-b332-ff7d26ef5e1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-d8stg_kube-system(46bb4d9e-a0a8-46f8-b332-ff7d26ef5e1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4747302ea5f21389583aaf7e27894d04b41ade2784dbf57a2ad73d6b267cebf\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-d8stg" podUID="46bb4d9e-a0a8-46f8-b332-ff7d26ef5e1e" Apr 16 01:07:42.495048 kubelet[2668]: E0416 
01:07:42.488929 2668 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.496s" Apr 16 01:07:42.518590 systemd-networkd[1376]: flannel.1: Link UP Apr 16 01:07:42.518595 systemd-networkd[1376]: flannel.1: Gained carrier Apr 16 01:07:44.448535 systemd-networkd[1376]: flannel.1: Gained IPv6LL Apr 16 01:07:50.098240 kubelet[2668]: E0416 01:07:50.096017 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:50.110522 containerd[1456]: time="2026-04-16T01:07:50.108389989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d8stg,Uid:46bb4d9e-a0a8-46f8-b332-ff7d26ef5e1e,Namespace:kube-system,Attempt:0,}" Apr 16 01:07:51.060665 kubelet[2668]: E0416 01:07:51.059959 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:51.068786 containerd[1456]: time="2026-04-16T01:07:51.068739136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ss9g6,Uid:d57b8307-c45a-4a1f-9631-c6266e5f824e,Namespace:kube-system,Attempt:0,}" Apr 16 01:07:51.512776 systemd-networkd[1376]: cni0: Link UP Apr 16 01:07:51.644145 kernel: cni0: port 1(veth501404b3) entered blocking state Apr 16 01:07:51.644287 kernel: cni0: port 1(veth501404b3) entered disabled state Apr 16 01:07:51.638722 systemd-networkd[1376]: veth501404b3: Link UP Apr 16 01:07:51.662078 kernel: veth501404b3: entered allmulticast mode Apr 16 01:07:51.700631 kernel: veth501404b3: entered promiscuous mode Apr 16 01:07:51.773748 kernel: cni0: port 1(veth501404b3) entered blocking state Apr 16 01:07:51.774345 kernel: cni0: port 1(veth501404b3) entered forwarding state Apr 16 01:07:51.774539 kernel: cni0: port 1(veth501404b3) entered disabled state Apr 16 01:07:51.944881 systemd-networkd[1376]: veth7aebfd06: Link UP Apr 16 01:07:51.954954 kernel: cni0: port 1(veth501404b3) entered blocking state Apr 16 01:07:51.958561 kernel: cni0: port 1(veth501404b3) entered forwarding state Apr 16 01:07:51.959501 systemd-networkd[1376]: veth501404b3: Gained carrier Apr 16 01:07:51.969451 systemd-networkd[1376]: cni0: Gained carrier Apr 16 01:07:51.972891 kernel: cni0: port 2(veth7aebfd06) entered blocking state Apr 16 01:07:51.972992 kernel: cni0: port 2(veth7aebfd06) entered disabled state Apr 16 01:07:51.978192 kernel: veth7aebfd06: entered allmulticast mode Apr 16 01:07:51.989889 kernel: veth7aebfd06: entered promiscuous mode Apr 16 01:07:52.014507 containerd[1456]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000100038), "name":"cbr0", "type":"bridge"} Apr 16 01:07:52.014507 containerd[1456]: delegateAdd: netconf sent to delegate plugin: Apr 16 01:07:52.067743 kernel: cni0: port 2(veth7aebfd06) entered blocking state Apr 16 01:07:52.068260 kernel: cni0: port 2(veth7aebfd06) entered forwarding state Apr 16 01:07:52.053986 
systemd-networkd[1376]: veth7aebfd06: Gained carrier Apr 16 01:07:52.190665 containerd[1456]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Apr 16 01:07:52.190665 containerd[1456]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"} Apr 16 01:07:52.190665 containerd[1456]: delegateAdd: netconf sent to delegate plugin: Apr 16 01:07:52.510582 containerd[1456]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-16T01:07:52.509313802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:07:52.510582 containerd[1456]: time="2026-04-16T01:07:52.509500058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:07:52.510582 containerd[1456]: time="2026-04-16T01:07:52.509517470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:07:52.510582 containerd[1456]: time="2026-04-16T01:07:52.509662164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:07:52.642832 containerd[1456]: time="2026-04-16T01:07:52.635741915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:07:52.651284 containerd[1456]: time="2026-04-16T01:07:52.647051847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:07:52.667982 containerd[1456]: time="2026-04-16T01:07:52.659696452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:07:52.702543 containerd[1456]: time="2026-04-16T01:07:52.687935034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:07:52.818253 systemd[1]: Started cri-containerd-3f3fa23ee197e25287b958d530ddf1435ac7fad64cd8e560888e2945af1de23a.scope - libcontainer container 3f3fa23ee197e25287b958d530ddf1435ac7fad64cd8e560888e2945af1de23a. Apr 16 01:07:52.907460 systemd[1]: Started cri-containerd-6eaf19c28979ec0c74e729519585fa47482e368cc930e16c227cc2b45c5e4ba9.scope - libcontainer container 6eaf19c28979ec0c74e729519585fa47482e368cc930e16c227cc2b45c5e4ba9. 
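The delegateAdd dumps above show the flannel CNI plugin handing a generated bridge/host-local netconf to its delegate: pod subnet 192.168.0.0/24 routed within the wider 192.168.0.0/17 flannel network, MTU 1450, ipMasq false. Those values are derived from /run/flannel/subnet.env, the file whose absence caused every earlier RunPodSandbox failure and which kube-flannel wrote once it came up. A reconstruction consistent with the logged netconf (the .1 gateway octet in FLANNEL_SUBNET is the flannel convention and is not shown directly):

    # /run/flannel/subnet.env
    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=false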
Apr 16 01:07:53.114912 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:07:53.167994 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:07:53.458482 containerd[1456]: time="2026-04-16T01:07:53.456943672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d8stg,Uid:46bb4d9e-a0a8-46f8-b332-ff7d26ef5e1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f3fa23ee197e25287b958d530ddf1435ac7fad64cd8e560888e2945af1de23a\"" Apr 16 01:07:53.464055 kubelet[2668]: E0416 01:07:53.463980 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:53.520522 containerd[1456]: time="2026-04-16T01:07:53.520276236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ss9g6,Uid:d57b8307-c45a-4a1f-9631-c6266e5f824e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eaf19c28979ec0c74e729519585fa47482e368cc930e16c227cc2b45c5e4ba9\"" Apr 16 01:07:53.580250 kubelet[2668]: E0416 01:07:53.579922 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:53.597306 containerd[1456]: time="2026-04-16T01:07:53.580978208Z" level=info msg="CreateContainer within sandbox \"3f3fa23ee197e25287b958d530ddf1435ac7fad64cd8e560888e2945af1de23a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 01:07:53.664186 systemd-networkd[1376]: veth501404b3: Gained IPv6LL Apr 16 01:07:53.944323 systemd-networkd[1376]: cni0: Gained IPv6LL Apr 16 01:07:54.052469 systemd-networkd[1376]: veth7aebfd06: Gained IPv6LL Apr 16 01:07:54.090007 containerd[1456]: time="2026-04-16T01:07:54.085875532Z" level=info msg="CreateContainer within sandbox \"6eaf19c28979ec0c74e729519585fa47482e368cc930e16c227cc2b45c5e4ba9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 01:07:54.228244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3036474030.mount: Deactivated successfully. Apr 16 01:07:54.402124 containerd[1456]: time="2026-04-16T01:07:54.401593679Z" level=info msg="CreateContainer within sandbox \"3f3fa23ee197e25287b958d530ddf1435ac7fad64cd8e560888e2945af1de23a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c693276863b16ffe16da7c0bdd76e309ea6503104f3f83e0d699cacf7aa5f9d8\"" Apr 16 01:07:54.490363 containerd[1456]: time="2026-04-16T01:07:54.489918677Z" level=info msg="StartContainer for \"c693276863b16ffe16da7c0bdd76e309ea6503104f3f83e0d699cacf7aa5f9d8\"" Apr 16 01:07:54.634674 containerd[1456]: time="2026-04-16T01:07:54.624537763Z" level=info msg="CreateContainer within sandbox \"6eaf19c28979ec0c74e729519585fa47482e368cc930e16c227cc2b45c5e4ba9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9164e85d1155b099aae938f02f5992fda2186409999c5b7cde4f540d25b7877\"" Apr 16 01:07:54.643781 containerd[1456]: time="2026-04-16T01:07:54.641907339Z" level=info msg="StartContainer for \"b9164e85d1155b099aae938f02f5992fda2186409999c5b7cde4f540d25b7877\"" Apr 16 01:07:54.848919 systemd[1]: Started cri-containerd-b9164e85d1155b099aae938f02f5992fda2186409999c5b7cde4f540d25b7877.scope - libcontainer container b9164e85d1155b099aae938f02f5992fda2186409999c5b7cde4f540d25b7877. 
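At this point the node's CNI data path is assembled: flannel.1 (the VXLAN device) and the cni0 bridge are up, one veth per coredns sandbox is enslaved to cni0, and both RunPodSandbox calls finally return sandbox ids. A quick sketch for confirming that topology on the host with iproute2:

    ip -br link          # flannel.1 and cni0 should both report UP
    bridge link          # veth501404b3 and veth7aebfd06 listed as cni0 ports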
Apr 16 01:07:54.934177 systemd[1]: Started cri-containerd-c693276863b16ffe16da7c0bdd76e309ea6503104f3f83e0d699cacf7aa5f9d8.scope - libcontainer container c693276863b16ffe16da7c0bdd76e309ea6503104f3f83e0d699cacf7aa5f9d8. Apr 16 01:07:55.600538 containerd[1456]: time="2026-04-16T01:07:55.600305048Z" level=info msg="StartContainer for \"c693276863b16ffe16da7c0bdd76e309ea6503104f3f83e0d699cacf7aa5f9d8\" returns successfully" Apr 16 01:07:55.693297 containerd[1456]: time="2026-04-16T01:07:55.691207171Z" level=info msg="StartContainer for \"b9164e85d1155b099aae938f02f5992fda2186409999c5b7cde4f540d25b7877\" returns successfully" Apr 16 01:07:56.289905 kubelet[2668]: E0416 01:07:56.285913 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:56.300392 kubelet[2668]: E0416 01:07:56.300311 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:56.459886 kubelet[2668]: I0416 01:07:56.452867 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-7nmbx" podStartSLOduration=56.787026387 podStartE2EDuration="2m13.452846372s" podCreationTimestamp="2026-04-16 01:05:43 +0000 UTC" firstStartedPulling="2026-04-16 01:05:51.041832107 +0000 UTC m=+58.537439268" lastFinishedPulling="2026-04-16 01:07:07.707652073 +0000 UTC m=+135.203259253" observedRunningTime="2026-04-16 01:07:42.44757362 +0000 UTC m=+169.943180782" watchObservedRunningTime="2026-04-16 01:07:56.452846372 +0000 UTC m=+183.948453536" Apr 16 01:07:56.459886 kubelet[2668]: I0416 01:07:56.453049 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ss9g6" podStartSLOduration=133.453042772 podStartE2EDuration="2m13.453042772s" podCreationTimestamp="2026-04-16 01:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:07:56.452601221 +0000 UTC m=+183.948208395" watchObservedRunningTime="2026-04-16 01:07:56.453042772 +0000 UTC m=+183.948649932" Apr 16 01:07:57.381895 kubelet[2668]: E0416 01:07:57.379776 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:57.390408 kubelet[2668]: E0416 01:07:57.387046 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:58.134758 kubelet[2668]: I0416 01:07:58.130275 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d8stg" podStartSLOduration=135.130256095 podStartE2EDuration="2m15.130256095s" podCreationTimestamp="2026-04-16 01:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:07:56.711135173 +0000 UTC m=+184.206742332" watchObservedRunningTime="2026-04-16 01:07:58.130256095 +0000 UTC m=+185.625863265" Apr 16 01:07:58.473420 kubelet[2668]: E0416 01:07:58.472992 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 16 01:07:58.476550 kubelet[2668]: E0416 01:07:58.476164 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:59.497821 kubelet[2668]: E0416 01:07:59.497583 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:17.088474 kubelet[2668]: E0416 01:08:17.076688 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:42.976983 kubelet[2668]: E0416 01:08:42.953025 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:44.149013 kubelet[2668]: E0416 01:08:44.147270 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:49.932291 kubelet[2668]: E0416 01:08:49.932175 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:55.464413 kubelet[2668]: E0416 01:08:55.462590 2668 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.29s" Apr 16 01:09:04.315650 kubelet[2668]: E0416 01:09:04.308426 2668 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.361s" Apr 16 01:09:04.809434 kubelet[2668]: E0416 01:09:04.808954 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:07.641539 kubelet[2668]: E0416 01:09:07.640335 2668 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.688s" Apr 16 01:09:11.026173 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:45258.service - OpenSSH per-connection server daemon (10.0.0.1:45258). Apr 16 01:09:12.431709 sshd[3854]: Accepted publickey for core from 10.0.0.1 port 45258 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:09:13.722986 sshd[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:09:14.180017 systemd-logind[1443]: New session 6 of user core. Apr 16 01:09:14.230903 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 16 01:09:19.456477 kubelet[2668]: E0416 01:09:19.449345 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:21.947726 kubelet[2668]: E0416 01:09:21.947415 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:24.641932 sshd[3854]: pam_unix(sshd:session): session closed for user core Apr 16 01:09:24.810226 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. 
Apr 16 01:09:24.841775 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:45258.service: Deactivated successfully. Apr 16 01:09:25.032604 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 01:09:25.033740 systemd[1]: session-6.scope: Consumed 2.188s CPU time. Apr 16 01:09:25.050550 systemd-logind[1443]: Removed session 6. Apr 16 01:09:29.689447 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:36336.service - OpenSSH per-connection server daemon (10.0.0.1:36336). Apr 16 01:09:29.791682 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 36336 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:09:29.824592 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:09:29.846523 systemd-logind[1443]: New session 7 of user core. Apr 16 01:09:29.862561 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 01:09:29.952984 kubelet[2668]: E0416 01:09:29.948806 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:30.405343 sshd[3932]: pam_unix(sshd:session): session closed for user core Apr 16 01:09:30.415087 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:36336.service: Deactivated successfully. Apr 16 01:09:30.419760 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 01:09:30.424420 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Apr 16 01:09:30.425580 systemd-logind[1443]: Removed session 7. Apr 16 01:09:35.505969 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:50406.service - OpenSSH per-connection server daemon (10.0.0.1:50406). Apr 16 01:09:35.645477 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 50406 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:09:35.647147 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:09:35.658283 systemd-logind[1443]: New session 8 of user core. Apr 16 01:09:35.670651 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 01:09:36.733622 sshd[3968]: pam_unix(sshd:session): session closed for user core Apr 16 01:09:36.794741 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:50406.service: Deactivated successfully. Apr 16 01:09:36.846150 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 01:09:36.850499 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Apr 16 01:09:36.851983 systemd-logind[1443]: Removed session 8. Apr 16 01:09:41.788147 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:50416.service - OpenSSH per-connection server daemon (10.0.0.1:50416). Apr 16 01:09:41.889161 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 50416 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:09:41.890966 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:09:41.895262 systemd-logind[1443]: New session 9 of user core. Apr 16 01:09:41.906618 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 01:09:42.208459 sshd[4003]: pam_unix(sshd:session): session closed for user core Apr 16 01:09:42.235672 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:50416.service: Deactivated successfully. Apr 16 01:09:42.245268 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 01:09:42.246310 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. 
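Each SSH connection in this section follows the same lifecycle: sshd accepts the publickey, PAM opens the session, systemd-logind registers session N, and systemd runs it as session-N.scope until logout (session 6 above even reports 2.188s of CPU time consumed when its scope is torn down). The live view of the same state, as a sketch:

    loginctl list-sessions              # one row per logind session
    systemctl status session-6.scope    # the scope unit backing session 6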
Apr 16 01:09:42.256356 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:50426.service - OpenSSH per-connection server daemon (10.0.0.1:50426). Apr 16 01:09:42.266562 systemd-logind[1443]: Removed session 9. Apr 16 01:09:42.487523 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 50426 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:09:42.509978 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:09:42.545322 systemd-logind[1443]: New session 10 of user core. Apr 16 01:09:42.587675 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 01:09:43.349026 sshd[4020]: pam_unix(sshd:session): session closed for user core Apr 16 01:09:43.499855 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:50426.service: Deactivated successfully. Apr 16 01:09:43.573757 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 01:09:43.575862 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Apr 16 01:09:43.674297 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:50428.service - OpenSSH per-connection server daemon (10.0.0.1:50428). Apr 16 01:09:43.702927 systemd-logind[1443]: Removed session 10. Apr 16 01:09:44.349541 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 50428 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:09:44.574855 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:09:44.680276 systemd-logind[1443]: New session 11 of user core. Apr 16 01:09:44.733867 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 01:09:46.102279 sshd[4034]: pam_unix(sshd:session): session closed for user core Apr 16 01:09:46.119741 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:50428.service: Deactivated successfully. Apr 16 01:09:46.126327 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 01:09:46.127958 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Apr 16 01:09:46.141912 systemd-logind[1443]: Removed session 11. Apr 16 01:09:52.845287 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:54316.service - OpenSSH per-connection server daemon (10.0.0.1:54316). Apr 16 01:09:53.521386 kubelet[2668]: E0416 01:09:53.478870 2668 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.476s" Apr 16 01:09:53.549040 kubelet[2668]: E0416 01:09:53.522064 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:53.923360 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 54316 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:09:53.939961 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:09:54.132432 systemd-logind[1443]: New session 12 of user core. Apr 16 01:09:54.230360 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 01:09:55.174157 sshd[4078]: pam_unix(sshd:session): session closed for user core Apr 16 01:09:55.181836 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:54316.service: Deactivated successfully. Apr 16 01:09:55.210394 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 01:09:55.228207 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Apr 16 01:09:55.233145 systemd-logind[1443]: Removed session 12. 
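Every accepted connection logs the same RSA SHA256 fingerprint, so this is a single client identity reconnecting repeatedly rather than many distinct users. To check a candidate public key against the logged fingerprint (sketch; the key path is hypothetical):

    ssh-keygen -lf /path/to/candidate_key.pub
    # prints "<bits> SHA256:<fingerprint> <comment> (RSA)" for comparison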
Apr 16 01:09:56.949783 kubelet[2668]: E0416 01:09:56.949059 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:10:00.950629 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:53772.service - OpenSSH per-connection server daemon (10.0.0.1:53772). Apr 16 01:10:01.568423 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 53772 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:10:01.584012 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:01.604914 systemd-logind[1443]: New session 13 of user core. Apr 16 01:10:01.652723 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 01:10:02.149954 sshd[4132]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:02.168776 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:53772.service: Deactivated successfully. Apr 16 01:10:02.183981 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 01:10:02.184782 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Apr 16 01:10:02.185859 systemd-logind[1443]: Removed session 13. Apr 16 01:10:04.959383 kubelet[2668]: E0416 01:10:04.958565 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:10:06.930177 kubelet[2668]: E0416 01:10:06.929822 2668 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:10:07.193626 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:42536.service - OpenSSH per-connection server daemon (10.0.0.1:42536). Apr 16 01:10:07.535556 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 42536 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:10:07.546976 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:07.649987 systemd-logind[1443]: New session 14 of user core. Apr 16 01:10:07.718965 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 01:10:09.206280 sshd[4166]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:09.359449 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:42536.service: Deactivated successfully. Apr 16 01:10:09.370389 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 01:10:09.372357 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Apr 16 01:10:09.382348 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:42550.service - OpenSSH per-connection server daemon (10.0.0.1:42550). Apr 16 01:10:09.385006 systemd-logind[1443]: Removed session 14. Apr 16 01:10:09.532613 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 42550 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:10:09.541025 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:09.569477 systemd-logind[1443]: New session 15 of user core. Apr 16 01:10:09.573442 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 01:10:10.772837 sshd[4181]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:10.952726 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:42550.service: Deactivated successfully. 
Apr 16 01:10:10.985678 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 01:10:11.010487 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Apr 16 01:10:11.074753 systemd-logind[1443]: Removed session 15. Apr 16 01:10:11.243578 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:42558.service - OpenSSH per-connection server daemon (10.0.0.1:42558). Apr 16 01:10:11.614607 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 42558 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:10:11.626165 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:11.643241 systemd-logind[1443]: New session 16 of user core. Apr 16 01:10:11.652864 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 16 01:10:13.061709 sshd[4215]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:13.080450 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:42558.service: Deactivated successfully. Apr 16 01:10:13.083799 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 01:10:13.087815 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Apr 16 01:10:13.124065 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:42566.service - OpenSSH per-connection server daemon (10.0.0.1:42566). Apr 16 01:10:13.144693 systemd-logind[1443]: Removed session 16. Apr 16 01:10:13.288288 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 42566 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:10:13.294667 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:13.315241 systemd-logind[1443]: New session 17 of user core. Apr 16 01:10:13.326281 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 01:10:13.966606 sshd[4234]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:13.987950 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:42566.service: Deactivated successfully. Apr 16 01:10:13.990837 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 01:10:13.999713 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Apr 16 01:10:14.007847 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:42582.service - OpenSSH per-connection server daemon (10.0.0.1:42582). Apr 16 01:10:14.009038 systemd-logind[1443]: Removed session 17. Apr 16 01:10:14.088072 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 42582 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:10:14.093088 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:14.101017 systemd-logind[1443]: New session 18 of user core. Apr 16 01:10:14.160567 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 16 01:10:14.710069 sshd[4246]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:14.741986 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:42582.service: Deactivated successfully. Apr 16 01:10:14.760611 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 01:10:14.767956 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Apr 16 01:10:14.772537 systemd-logind[1443]: Removed session 18. Apr 16 01:10:19.854457 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:53888.service - OpenSSH per-connection server daemon (10.0.0.1:53888). 
Apr 16 01:10:20.018139 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 53888 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:10:20.041862 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:20.076674 systemd-logind[1443]: New session 19 of user core. Apr 16 01:10:20.100951 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 16 01:10:20.555805 sshd[4284]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:20.560184 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:53888.service: Deactivated successfully. Apr 16 01:10:20.586688 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 01:10:20.595668 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Apr 16 01:10:20.603432 systemd-logind[1443]: Removed session 19. Apr 16 01:10:25.597712 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:37944.service - OpenSSH per-connection server daemon (10.0.0.1:37944). Apr 16 01:10:25.648983 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 37944 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:10:25.650780 sshd[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:25.661137 systemd-logind[1443]: New session 20 of user core. Apr 16 01:10:25.670334 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 16 01:10:25.862211 sshd[4327]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:25.865694 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:37944.service: Deactivated successfully. Apr 16 01:10:25.867818 systemd[1]: session-20.scope: Deactivated successfully. Apr 16 01:10:25.869231 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Apr 16 01:10:25.873370 systemd-logind[1443]: Removed session 20. Apr 16 01:10:30.893684 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:37960.service - OpenSSH per-connection server daemon (10.0.0.1:37960). Apr 16 01:10:30.931412 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 37960 ssh2: RSA SHA256:I743H0tSjgd2sqZ10Lz6JNdpx4qGRI3TVgJ87hZylxs Apr 16 01:10:30.934397 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:30.939631 systemd-logind[1443]: New session 21 of user core. Apr 16 01:10:30.951376 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 16 01:10:31.179687 sshd[4364]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:31.187176 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:37960.service: Deactivated successfully. Apr 16 01:10:31.189066 systemd[1]: session-21.scope: Deactivated successfully. Apr 16 01:10:31.189879 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Apr 16 01:10:31.190775 systemd-logind[1443]: Removed session 21.