Apr 14 00:15:03.532297 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026 Apr 14 00:15:03.532346 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 00:15:03.532361 kernel: BIOS-provided physical RAM map: Apr 14 00:15:03.532369 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 14 00:15:03.532377 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 14 00:15:03.532384 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 14 00:15:03.532394 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 14 00:15:03.532466 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 14 00:15:03.532476 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 14 00:15:03.532484 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 14 00:15:03.532499 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 14 00:15:03.532506 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Apr 14 00:15:03.532514 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Apr 14 00:15:03.532522 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Apr 14 00:15:03.532532 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 14 00:15:03.532541 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 14 00:15:03.532552 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 
14 00:15:03.532560 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 14 00:15:03.532568 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 14 00:15:03.532575 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 14 00:15:03.532584 kernel: NX (Execute Disable) protection: active Apr 14 00:15:03.532592 kernel: APIC: Static calls initialized Apr 14 00:15:03.532600 kernel: efi: EFI v2.7 by EDK II Apr 14 00:15:03.532608 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Apr 14 00:15:03.532617 kernel: SMBIOS 2.8 present. Apr 14 00:15:03.532625 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Apr 14 00:15:03.532633 kernel: Hypervisor detected: KVM Apr 14 00:15:03.532646 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 14 00:15:03.532654 kernel: kvm-clock: using sched offset of 8887176635 cycles Apr 14 00:15:03.532664 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 14 00:15:03.532672 kernel: tsc: Detected 2793.438 MHz processor Apr 14 00:15:03.532681 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 14 00:15:03.532690 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 14 00:15:03.532699 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000 Apr 14 00:15:03.532707 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 14 00:15:03.532733 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 14 00:15:03.532744 kernel: Using GB pages for direct mapping Apr 14 00:15:03.532768 kernel: Secure boot disabled Apr 14 00:15:03.532778 kernel: ACPI: Early table checksum verification disabled Apr 14 00:15:03.532810 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 14 00:15:03.532855 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 14 00:15:03.532878 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:15:03.532903 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:15:03.532930 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 14 00:15:03.532940 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:15:03.532949 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:15:03.532958 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:15:03.532967 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:15:03.532976 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 14 00:15:03.532985 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Apr 14 00:15:03.532995 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Apr 14 00:15:03.533003 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 14 00:15:03.533010 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Apr 14 00:15:03.533018 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Apr 14 00:15:03.533025 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Apr 14 00:15:03.533033 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Apr 14 00:15:03.533041 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Apr 14 00:15:03.533049 kernel: No NUMA configuration found Apr 14 00:15:03.533058 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 14 00:15:03.533069 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 14 00:15:03.533078 kernel: Zone ranges: Apr 14 00:15:03.533088 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 14 00:15:03.533096 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cf3ffff] Apr 14 00:15:03.533105 kernel: Normal empty Apr 14 00:15:03.533114 kernel: Movable zone start for each node Apr 14 00:15:03.533123 kernel: Early memory node ranges Apr 14 00:15:03.533132 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 14 00:15:03.533141 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 14 00:15:03.533151 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 14 00:15:03.533162 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 14 00:15:03.533171 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 14 00:15:03.533180 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 14 00:15:03.533189 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 14 00:15:03.533198 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 14 00:15:03.533207 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 14 00:15:03.533216 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 14 00:15:03.533225 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 14 00:15:03.533234 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 14 00:15:03.533245 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 14 00:15:03.533255 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 14 00:15:03.533264 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 14 00:15:03.533273 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 14 00:15:03.533282 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 14 00:15:03.533291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 14 00:15:03.533300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 14 00:15:03.533309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 14 00:15:03.533318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 14 
00:15:03.533330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 14 00:15:03.533339 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 14 00:15:03.533348 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 14 00:15:03.533357 kernel: TSC deadline timer available Apr 14 00:15:03.533366 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 14 00:15:03.533375 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 14 00:15:03.533384 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 14 00:15:03.533393 kernel: kvm-guest: setup PV sched yield Apr 14 00:15:03.533475 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 14 00:15:03.533506 kernel: Booting paravirtualized kernel on KVM Apr 14 00:15:03.533516 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 14 00:15:03.533526 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 14 00:15:03.533535 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 14 00:15:03.533545 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 14 00:15:03.533554 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 14 00:15:03.533562 kernel: kvm-guest: PV spinlocks enabled Apr 14 00:15:03.533570 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 14 00:15:03.533579 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 00:15:03.533593 kernel: random: crng init done Apr 14 00:15:03.533600 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 14 00:15:03.533608 kernel: Inode-cache hash table 
entries: 262144 (order: 9, 2097152 bytes, linear) Apr 14 00:15:03.533616 kernel: Fallback order for Node 0: 0 Apr 14 00:15:03.533624 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 14 00:15:03.533632 kernel: Policy zone: DMA32 Apr 14 00:15:03.533640 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 14 00:15:03.533650 kernel: Memory: 2394676K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 172120K reserved, 0K cma-reserved) Apr 14 00:15:03.533661 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 14 00:15:03.533671 kernel: ftrace: allocating 37996 entries in 149 pages Apr 14 00:15:03.533679 kernel: ftrace: allocated 149 pages with 4 groups Apr 14 00:15:03.533689 kernel: Dynamic Preempt: voluntary Apr 14 00:15:03.533698 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 14 00:15:03.533718 kernel: rcu: RCU event tracing is enabled. Apr 14 00:15:03.533729 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 14 00:15:03.533739 kernel: Trampoline variant of Tasks RCU enabled. Apr 14 00:15:03.533749 kernel: Rude variant of Tasks RCU enabled. Apr 14 00:15:03.533759 kernel: Tracing variant of Tasks RCU enabled. Apr 14 00:15:03.533769 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 14 00:15:03.533779 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 14 00:15:03.533857 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 14 00:15:03.533866 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 14 00:15:03.533875 kernel: Console: colour dummy device 80x25 Apr 14 00:15:03.533883 kernel: printk: console [ttyS0] enabled Apr 14 00:15:03.533892 kernel: ACPI: Core revision 20230628 Apr 14 00:15:03.533920 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 14 00:15:03.533930 kernel: APIC: Switch to symmetric I/O mode setup Apr 14 00:15:03.533940 kernel: x2apic enabled Apr 14 00:15:03.533954 kernel: APIC: Switched APIC routing to: physical x2apic Apr 14 00:15:03.533966 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 14 00:15:03.533976 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 14 00:15:03.533985 kernel: kvm-guest: setup PV IPIs Apr 14 00:15:03.533995 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 14 00:15:03.534005 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 14 00:15:03.534020 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 14 00:15:03.534030 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 14 00:15:03.534040 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 14 00:15:03.534050 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 14 00:15:03.534060 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 14 00:15:03.534069 kernel: Spectre V2 : Mitigation: Retpolines Apr 14 00:15:03.534079 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 14 00:15:03.534090 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 14 00:15:03.534100 kernel: RETBleed: Vulnerable Apr 14 00:15:03.534112 kernel: Speculative Store Bypass: Vulnerable Apr 14 00:15:03.534124 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 14 00:15:03.534137 kernel: GDS: Unknown: Dependent on hypervisor status Apr 14 00:15:03.534147 kernel: active return thunk: its_return_thunk Apr 14 00:15:03.534156 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 14 00:15:03.534166 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 14 00:15:03.534176 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 14 00:15:03.534186 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 14 00:15:03.534195 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 14 00:15:03.534208 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 14 00:15:03.534217 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 14 00:15:03.534227 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 14 00:15:03.534237 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 14 00:15:03.534247 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 14 00:15:03.534256 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 14 00:15:03.534266 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 14 00:15:03.534276 kernel: Freeing SMP alternatives memory: 32K Apr 14 00:15:03.534286 kernel: pid_max: default: 32768 minimum: 301 Apr 14 00:15:03.534298 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 14 00:15:03.534308 kernel: landlock: Up and running. Apr 14 00:15:03.534318 kernel: SELinux: Initializing. 
Apr 14 00:15:03.534328 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 14 00:15:03.534338 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 14 00:15:03.534348 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 14 00:15:03.534358 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 00:15:03.534368 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 00:15:03.534380 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 00:15:03.534391 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 14 00:15:03.534440 kernel: signal: max sigframe size: 3632 Apr 14 00:15:03.534456 kernel: rcu: Hierarchical SRCU implementation. Apr 14 00:15:03.534467 kernel: rcu: Max phase no-delay instances is 400. Apr 14 00:15:03.534477 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 14 00:15:03.534486 kernel: smp: Bringing up secondary CPUs ... Apr 14 00:15:03.534496 kernel: smpboot: x86: Booting SMP configuration: Apr 14 00:15:03.534504 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 14 00:15:03.534518 kernel: smp: Brought up 1 node, 4 CPUs Apr 14 00:15:03.534526 kernel: smpboot: Max logical packages: 1 Apr 14 00:15:03.534535 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 14 00:15:03.534545 kernel: devtmpfs: initialized Apr 14 00:15:03.534555 kernel: x86/mm: Memory block size: 128MB Apr 14 00:15:03.534565 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 14 00:15:03.534575 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 14 00:15:03.534585 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 14 00:15:03.534594 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 14 00:15:03.534607 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 14 00:15:03.534617 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 14 00:15:03.534626 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 14 00:15:03.534636 kernel: pinctrl core: initialized pinctrl subsystem Apr 14 00:15:03.534646 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 14 00:15:03.534656 kernel: audit: initializing netlink subsys (disabled) Apr 14 00:15:03.534666 kernel: audit: type=2000 audit(1776125699.845:1): state=initialized audit_enabled=0 res=1 Apr 14 00:15:03.534675 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 14 00:15:03.534685 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 14 00:15:03.534697 kernel: cpuidle: using governor menu Apr 14 00:15:03.534707 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 14 00:15:03.534717 kernel: dca service started, version 1.12.1 Apr 14 00:15:03.534727 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 14 
00:15:03.534737 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 14 00:15:03.534747 kernel: PCI: Using configuration type 1 for base access Apr 14 00:15:03.534757 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 14 00:15:03.534767 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 14 00:15:03.534776 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 14 00:15:03.534850 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 14 00:15:03.534860 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 14 00:15:03.534870 kernel: ACPI: Added _OSI(Module Device) Apr 14 00:15:03.534880 kernel: ACPI: Added _OSI(Processor Device) Apr 14 00:15:03.534890 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 14 00:15:03.534900 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 14 00:15:03.534909 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 14 00:15:03.534919 kernel: ACPI: Interpreter enabled Apr 14 00:15:03.534929 kernel: ACPI: PM: (supports S0 S3 S5) Apr 14 00:15:03.534945 kernel: ACPI: Using IOAPIC for interrupt routing Apr 14 00:15:03.534956 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 14 00:15:03.534966 kernel: PCI: Using E820 reservations for host bridge windows Apr 14 00:15:03.534974 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 14 00:15:03.534983 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 14 00:15:03.535173 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 14 00:15:03.535279 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 14 00:15:03.535367 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 14 00:15:03.535383 kernel: PCI host bridge to bus 0000:00 Apr 14 00:15:03.535528 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Apr 14 00:15:03.535612 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 14 00:15:03.535692 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 14 00:15:03.535769 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 14 00:15:03.535872 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 14 00:15:03.535950 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 14 00:15:03.536028 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 14 00:15:03.536130 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 14 00:15:03.536228 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 14 00:15:03.536316 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 14 00:15:03.536474 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 14 00:15:03.536580 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 14 00:15:03.536665 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 14 00:15:03.536752 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 14 00:15:03.536904 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 14 00:15:03.537013 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 14 00:15:03.537381 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 14 00:15:03.537819 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 14 00:15:03.537930 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 14 00:15:03.538275 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 14 00:15:03.538367 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Apr 14 00:15:03.538497 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 14 00:15:03.538591 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Apr 14 00:15:03.538672 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Apr 14 00:15:03.538761 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 14 00:15:03.538974 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 14 00:15:03.539063 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 14 00:15:03.539151 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 14 00:15:03.539238 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 14 00:15:03.539328 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 14 00:15:03.539602 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Apr 14 00:15:03.539694 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Apr 14 00:15:03.539909 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 14 00:15:03.540218 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Apr 14 00:15:03.540232 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 14 00:15:03.540241 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 14 00:15:03.540249 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 14 00:15:03.540258 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 14 00:15:03.540267 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 14 00:15:03.540369 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 14 00:15:03.540387 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 14 00:15:03.540397 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 14 00:15:03.540529 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 14 00:15:03.540540 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 14 00:15:03.540550 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 14 00:15:03.540560 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 
Apr 14 00:15:03.540570 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 14 00:15:03.540579 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 14 00:15:03.540588 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 14 00:15:03.540599 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 14 00:15:03.540608 kernel: iommu: Default domain type: Translated Apr 14 00:15:03.540617 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 14 00:15:03.540626 kernel: efivars: Registered efivars operations Apr 14 00:15:03.540637 kernel: PCI: Using ACPI for IRQ routing Apr 14 00:15:03.540647 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 14 00:15:03.540657 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 14 00:15:03.540667 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 14 00:15:03.540676 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 14 00:15:03.540691 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 14 00:15:03.540867 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 14 00:15:03.540961 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 14 00:15:03.541047 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 14 00:15:03.541060 kernel: vgaarb: loaded Apr 14 00:15:03.541070 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 14 00:15:03.541081 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 14 00:15:03.541090 kernel: clocksource: Switched to clocksource kvm-clock Apr 14 00:15:03.541101 kernel: VFS: Disk quotas dquot_6.6.0 Apr 14 00:15:03.541114 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 14 00:15:03.541125 kernel: pnp: PnP ACPI init Apr 14 00:15:03.541220 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 14 00:15:03.541235 kernel: pnp: PnP ACPI: found 6 devices Apr 14 00:15:03.541245 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 14 00:15:03.541255 kernel: NET: Registered PF_INET protocol family Apr 14 00:15:03.541266 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 14 00:15:03.541276 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 14 00:15:03.541288 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 14 00:15:03.541299 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 14 00:15:03.541648 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 14 00:15:03.541664 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 14 00:15:03.541674 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 14 00:15:03.541684 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 14 00:15:03.541694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 14 00:15:03.541704 kernel: NET: Registered PF_XDP protocol family Apr 14 00:15:03.541820 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 14 00:15:03.541910 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 14 00:15:03.541996 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 14 00:15:03.542075 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 14 00:15:03.542151 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 14 00:15:03.542227 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 14 00:15:03.542303 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 14 00:15:03.542379 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Apr 14 00:15:03.542392 kernel: PCI: CLS 0 bytes, default 64 Apr 14 00:15:03.542447 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 
fixed counters, 10737418240 ms ovfl timer Apr 14 00:15:03.542459 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 14 00:15:03.542469 kernel: Initialise system trusted keyrings Apr 14 00:15:03.542479 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 14 00:15:03.542489 kernel: Key type asymmetric registered Apr 14 00:15:03.542499 kernel: Asymmetric key parser 'x509' registered Apr 14 00:15:03.542509 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 14 00:15:03.542519 kernel: io scheduler mq-deadline registered Apr 14 00:15:03.542529 kernel: io scheduler kyber registered Apr 14 00:15:03.542542 kernel: io scheduler bfq registered Apr 14 00:15:03.542552 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 14 00:15:03.542563 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 14 00:15:03.542572 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 14 00:15:03.542582 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 14 00:15:03.542593 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 14 00:15:03.542603 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 14 00:15:03.542613 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 14 00:15:03.542623 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 14 00:15:03.542635 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 14 00:15:03.542728 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 14 00:15:03.542835 kernel: rtc_cmos 00:04: registered as rtc0 Apr 14 00:15:03.542849 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 14 00:15:03.542927 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T00:15:02 UTC (1776125702) Apr 14 00:15:03.543005 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Apr 14 00:15:03.543017 kernel: intel_pstate: CPU model not supported Apr 14 
00:15:03.543029 kernel: efifb: probing for efifb Apr 14 00:15:03.543038 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Apr 14 00:15:03.543046 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Apr 14 00:15:03.543055 kernel: efifb: scrolling: redraw Apr 14 00:15:03.543063 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Apr 14 00:15:03.543072 kernel: Console: switching to colour frame buffer device 100x37 Apr 14 00:15:03.543081 kernel: fb0: EFI VGA frame buffer device Apr 14 00:15:03.543107 kernel: pstore: Using crash dump compression: deflate Apr 14 00:15:03.543119 kernel: pstore: Registered efi_pstore as persistent store backend Apr 14 00:15:03.543129 kernel: NET: Registered PF_INET6 protocol family Apr 14 00:15:03.543138 kernel: Segment Routing with IPv6 Apr 14 00:15:03.543147 kernel: In-situ OAM (IOAM) with IPv6 Apr 14 00:15:03.543156 kernel: NET: Registered PF_PACKET protocol family Apr 14 00:15:03.543166 kernel: Key type dns_resolver registered Apr 14 00:15:03.543176 kernel: IPI shorthand broadcast: enabled Apr 14 00:15:03.543186 kernel: sched_clock: Marking stable (1611083494, 593250576)->(2645131874, -440797804) Apr 14 00:15:03.543197 kernel: registered taskstats version 1 Apr 14 00:15:03.543207 kernel: Loading compiled-in X.509 certificates Apr 14 00:15:03.543217 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 14 00:15:03.543229 kernel: Key type .fscrypt registered Apr 14 00:15:03.543239 kernel: Key type fscrypt-provisioning registered Apr 14 00:15:03.543250 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 14 00:15:03.543261 kernel: ima: Allocated hash algorithm: sha1
Apr 14 00:15:03.543271 kernel: ima: No architecture policies found
Apr 14 00:15:03.543281 kernel: clk: Disabling unused clocks
Apr 14 00:15:03.543291 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 14 00:15:03.543302 kernel: Write protecting the kernel read-only data: 36864k
Apr 14 00:15:03.543312 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 14 00:15:03.543325 kernel: Run /init as init process
Apr 14 00:15:03.543334 kernel: with arguments:
Apr 14 00:15:03.543343 kernel: /init
Apr 14 00:15:03.543352 kernel: with environment:
Apr 14 00:15:03.543360 kernel: HOME=/
Apr 14 00:15:03.543369 kernel: TERM=linux
Apr 14 00:15:03.543381 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 00:15:03.543396 systemd[1]: Detected virtualization kvm.
Apr 14 00:15:03.543492 systemd[1]: Detected architecture x86-64.
Apr 14 00:15:03.543503 systemd[1]: Running in initrd.
Apr 14 00:15:03.543514 systemd[1]: No hostname configured, using default hostname.
Apr 14 00:15:03.543525 systemd[1]: Hostname set to .
Apr 14 00:15:03.543544 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 00:15:03.543553 systemd[1]: Queued start job for default target initrd.target.
Apr 14 00:15:03.543562 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 00:15:03.543572 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 00:15:03.543582 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 14 00:15:03.543591 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 00:15:03.543601 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 14 00:15:03.543611 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 14 00:15:03.543633 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 14 00:15:03.543644 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 14 00:15:03.543655 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 00:15:03.543666 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 00:15:03.543686 systemd[1]: Reached target paths.target - Path Units.
Apr 14 00:15:03.543697 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 00:15:03.543708 systemd[1]: Reached target swap.target - Swaps.
Apr 14 00:15:03.543723 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 00:15:03.543734 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 00:15:03.543745 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 00:15:03.543756 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 00:15:03.543770 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 00:15:03.543809 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 00:15:03.543825 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 00:15:03.543835 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 00:15:03.543844 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 00:15:03.543859 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 14 00:15:03.543869 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 00:15:03.543880 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 14 00:15:03.543891 systemd[1]: Starting systemd-fsck-usr.service...
Apr 14 00:15:03.543902 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 00:15:03.543913 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 00:15:03.543925 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:15:03.543936 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 14 00:15:03.543947 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 00:15:03.543996 systemd-journald[194]: Collecting audit messages is disabled.
Apr 14 00:15:03.544029 systemd[1]: Finished systemd-fsck-usr.service.
Apr 14 00:15:03.544044 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 00:15:03.544056 systemd-journald[194]: Journal started
Apr 14 00:15:03.544081 systemd-journald[194]: Runtime Journal (/run/log/journal/6a59ddce994f4a3eba90b7e015519c8b) is 6.0M, max 48.3M, 42.2M free.
Apr 14 00:15:03.538278 systemd-modules-load[195]: Inserted module 'overlay'
Apr 14 00:15:03.571163 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 00:15:03.616583 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:15:03.652702 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:15:03.658222 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 00:15:03.659543 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 00:15:03.664708 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 00:15:03.702541 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 14 00:15:03.702634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 00:15:03.706172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:15:03.720079 kernel: Bridge firewalling registered
Apr 14 00:15:03.713994 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 14 00:15:03.714822 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 00:15:03.724718 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 00:15:03.751676 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 14 00:15:03.814981 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 00:15:03.832131 dracut-cmdline[226]: dracut-dracut-053
Apr 14 00:15:03.835577 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 00:15:03.842695 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 00:15:03.867662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 00:15:03.902980 systemd-resolved[242]: Positive Trust Anchors:
Apr 14 00:15:03.903075 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 00:15:03.903100 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 00:15:03.905609 systemd-resolved[242]: Defaulting to hostname 'linux'.
Apr 14 00:15:03.906737 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 00:15:03.908283 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 00:15:04.052155 kernel: SCSI subsystem initialized
Apr 14 00:15:04.091774 kernel: Loading iSCSI transport class v2.0-870.
Apr 14 00:15:04.120652 kernel: iscsi: registered transport (tcp)
Apr 14 00:15:04.152719 kernel: iscsi: registered transport (qla4xxx)
Apr 14 00:15:04.153076 kernel: QLogic iSCSI HBA Driver
Apr 14 00:15:04.323132 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 14 00:15:04.333969 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 14 00:15:04.368930 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 14 00:15:04.369020 kernel: device-mapper: uevent: version 1.0.3
Apr 14 00:15:04.371156 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 14 00:15:04.446077 kernel: raid6: avx512x4 gen() 22820 MB/s
Apr 14 00:15:04.488021 kernel: raid6: avx512x2 gen() 26361 MB/s
Apr 14 00:15:04.506840 kernel: raid6: avx512x1 gen() 34813 MB/s
Apr 14 00:15:04.524603 kernel: raid6: avx2x4 gen() 22797 MB/s
Apr 14 00:15:04.543017 kernel: raid6: avx2x2 gen() 21202 MB/s
Apr 14 00:15:04.562499 kernel: raid6: avx2x1 gen() 11638 MB/s
Apr 14 00:15:04.562578 kernel: raid6: using algorithm avx512x1 gen() 34813 MB/s
Apr 14 00:15:04.584944 kernel: raid6: .... xor() 15399 MB/s, rmw enabled
Apr 14 00:15:04.585025 kernel: raid6: using avx512x2 recovery algorithm
Apr 14 00:15:04.624681 kernel: xor: automatically using best checksumming function avx
Apr 14 00:15:04.697823 kernel: hrtimer: interrupt took 3321728 ns
Apr 14 00:15:05.150258 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 14 00:15:05.188256 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 00:15:05.200114 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 00:15:05.215019 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 14 00:15:05.218286 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 00:15:05.241017 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 14 00:15:05.270557 dracut-pre-trigger[432]: rd.md=0: removing MD RAID activation
Apr 14 00:15:05.337108 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 00:15:05.399931 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 00:15:05.461778 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 00:15:05.472070 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 14 00:15:05.488719 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 14 00:15:05.508336 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 00:15:05.516377 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 00:15:05.519365 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 00:15:05.540548 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 14 00:15:05.544713 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 14 00:15:05.552628 kernel: cryptd: max_cpu_qlen set to 1000
Apr 14 00:15:05.609839 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 14 00:15:05.616867 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 00:15:05.636299 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 14 00:15:05.636333 kernel: GPT:9289727 != 19775487
Apr 14 00:15:05.636345 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 14 00:15:05.636356 kernel: GPT:9289727 != 19775487
Apr 14 00:15:05.636367 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 14 00:15:05.636378 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:15:05.621123 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 00:15:05.621287 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:15:05.644066 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:15:05.645165 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 00:15:05.645499 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:15:05.654262 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:15:05.677618 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:15:05.680729 kernel: libata version 3.00 loaded.
Apr 14 00:15:05.692031 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 14 00:15:05.692082 kernel: AES CTR mode by8 optimization enabled
Apr 14 00:15:05.691250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 00:15:05.691329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:15:05.694534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:15:05.724788 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475)
Apr 14 00:15:05.729331 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 14 00:15:05.748960 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (469)
Apr 14 00:15:05.748991 kernel: ahci 0000:00:1f.2: version 3.0
Apr 14 00:15:05.749150 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 14 00:15:05.738781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:15:05.805267 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 14 00:15:05.808569 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 14 00:15:05.808768 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 14 00:15:05.817548 kernel: scsi host0: ahci
Apr 14 00:15:05.818064 kernel: scsi host1: ahci
Apr 14 00:15:05.823035 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 14 00:15:05.838536 kernel: scsi host2: ahci
Apr 14 00:15:05.838782 kernel: scsi host3: ahci
Apr 14 00:15:05.824445 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 14 00:15:05.851166 kernel: scsi host4: ahci
Apr 14 00:15:05.851395 kernel: scsi host5: ahci
Apr 14 00:15:05.851572 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 14 00:15:05.830975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 00:15:05.867645 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 14 00:15:05.867684 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 14 00:15:05.871691 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 14 00:15:05.871748 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 14 00:15:05.874451 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 14 00:15:05.880071 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 14 00:15:05.881728 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 00:15:05.894665 disk-uuid[571]: Primary Header is updated.
Apr 14 00:15:05.894665 disk-uuid[571]: Secondary Entries is updated.
Apr 14 00:15:05.894665 disk-uuid[571]: Secondary Header is updated.
Apr 14 00:15:05.908480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:15:05.935296 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 00:15:06.189666 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 14 00:15:06.189764 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 14 00:15:06.194956 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 14 00:15:06.200925 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 14 00:15:06.201022 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 14 00:15:06.201037 kernel: ata3.00: applying bridge limits
Apr 14 00:15:06.206525 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 14 00:15:06.206608 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 14 00:15:06.211053 kernel: ata3.00: configured for UDMA/100
Apr 14 00:15:06.215583 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 14 00:15:06.322529 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 14 00:15:06.322780 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 14 00:15:06.346250 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 14 00:15:06.922848 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 00:15:06.929335 disk-uuid[578]: The operation has completed successfully.
Apr 14 00:15:06.992939 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 14 00:15:06.993313 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 14 00:15:07.019935 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 14 00:15:07.024515 sh[604]: Success
Apr 14 00:15:07.052458 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 14 00:15:07.136741 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 14 00:15:07.144679 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 14 00:15:07.146725 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 14 00:15:07.195717 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 14 00:15:07.195794 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:15:07.201177 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 14 00:15:07.201295 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 14 00:15:07.203728 kernel: BTRFS info (device dm-0): using free space tree
Apr 14 00:15:07.222313 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 14 00:15:07.223880 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 14 00:15:07.244989 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 14 00:15:07.249026 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 14 00:15:07.276498 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:15:07.276551 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:15:07.276567 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:15:07.284515 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:15:07.306593 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 14 00:15:07.313791 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:15:07.327889 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 14 00:15:07.339771 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 14 00:15:07.450900 ignition[716]: Ignition 2.19.0
Apr 14 00:15:07.450922 ignition[716]: Stage: fetch-offline
Apr 14 00:15:07.450953 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:15:07.450959 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:15:07.451023 ignition[716]: parsed url from cmdline: ""
Apr 14 00:15:07.451025 ignition[716]: no config URL provided
Apr 14 00:15:07.451028 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Apr 14 00:15:07.465509 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 00:15:07.451033 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Apr 14 00:15:07.451052 ignition[716]: op(1): [started] loading QEMU firmware config module
Apr 14 00:15:07.451056 ignition[716]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 14 00:15:07.487121 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 00:15:07.506146 ignition[716]: op(1): [finished] loading QEMU firmware config module
Apr 14 00:15:07.526527 systemd-networkd[792]: lo: Link UP
Apr 14 00:15:07.526616 systemd-networkd[792]: lo: Gained carrier
Apr 14 00:15:07.528659 systemd-networkd[792]: Enumeration completed
Apr 14 00:15:07.529584 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 00:15:07.533667 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:15:07.533670 systemd-networkd[792]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 00:15:07.539910 systemd[1]: Reached target network.target - Network.
Apr 14 00:15:07.541510 systemd-networkd[792]: eth0: Link UP
Apr 14 00:15:07.541514 systemd-networkd[792]: eth0: Gained carrier
Apr 14 00:15:07.541528 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:15:07.619613 systemd-networkd[792]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 00:15:07.645790 ignition[716]: parsing config with SHA512: 875904bb05e0e265608edf75cdf8d34ca26f52c4c82f54d834ff98f159a16f771a31d9762bf6db482119c22c4b769de35ce688cbdedcabb5e8eff2f65610c557
Apr 14 00:15:07.666714 unknown[716]: fetched base config from "system"
Apr 14 00:15:07.668230 unknown[716]: fetched user config from "qemu"
Apr 14 00:15:07.669106 systemd-resolved[242]: Detected conflict on linux IN A 10.0.0.74
Apr 14 00:15:07.671746 ignition[716]: fetch-offline: fetch-offline passed
Apr 14 00:15:07.669118 systemd-resolved[242]: Hostname conflict, changing published hostname from 'linux' to 'linux2'.
Apr 14 00:15:07.673206 ignition[716]: Ignition finished successfully
Apr 14 00:15:07.677134 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 00:15:07.682071 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 14 00:15:07.692779 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 14 00:15:07.721841 ignition[797]: Ignition 2.19.0
Apr 14 00:15:07.722849 ignition[797]: Stage: kargs
Apr 14 00:15:07.723208 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:15:07.723220 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:15:07.724453 ignition[797]: kargs: kargs passed
Apr 14 00:15:07.730358 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 14 00:15:07.724501 ignition[797]: Ignition finished successfully
Apr 14 00:15:07.747121 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 14 00:15:07.806876 ignition[805]: Ignition 2.19.0
Apr 14 00:15:07.806899 ignition[805]: Stage: disks
Apr 14 00:15:07.807062 ignition[805]: no configs at "/usr/lib/ignition/base.d"
Apr 14 00:15:07.807070 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:15:07.807879 ignition[805]: disks: disks passed
Apr 14 00:15:07.807922 ignition[805]: Ignition finished successfully
Apr 14 00:15:07.817455 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 14 00:15:07.819565 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 14 00:15:07.823977 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 14 00:15:07.825502 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 00:15:07.843371 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 00:15:07.844536 systemd[1]: Reached target basic.target - Basic System.
Apr 14 00:15:07.872962 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 14 00:15:07.892721 systemd-fsck[815]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 14 00:15:07.905381 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 14 00:15:07.925970 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 14 00:15:08.286606 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 14 00:15:08.288720 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 14 00:15:08.293210 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 14 00:15:08.309904 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 00:15:08.318860 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 14 00:15:08.325124 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (824)
Apr 14 00:15:08.328837 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 14 00:15:08.340384 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:15:08.340491 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:15:08.340503 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:15:08.340516 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:15:08.328915 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 14 00:15:08.328942 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 00:15:08.347319 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 14 00:15:08.359379 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 00:15:08.365357 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 14 00:15:08.523229 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory
Apr 14 00:15:08.537366 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory
Apr 14 00:15:08.547366 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory
Apr 14 00:15:08.562577 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 14 00:15:08.831946 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 14 00:15:08.852860 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 14 00:15:08.909577 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 14 00:15:08.918728 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 14 00:15:08.925219 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:15:08.963014 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 14 00:15:08.977990 ignition[937]: INFO : Ignition 2.19.0
Apr 14 00:15:08.980605 ignition[937]: INFO : Stage: mount
Apr 14 00:15:08.980605 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 00:15:08.980605 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:15:08.980605 ignition[937]: INFO : mount: mount passed
Apr 14 00:15:08.980605 ignition[937]: INFO : Ignition finished successfully
Apr 14 00:15:08.999220 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 14 00:15:09.016894 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 14 00:15:09.306802 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 00:15:09.326215 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (950)
Apr 14 00:15:09.326282 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 00:15:09.326295 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 00:15:09.329027 kernel: BTRFS info (device vda6): using free space tree
Apr 14 00:15:09.339521 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 00:15:09.342105 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 00:15:09.424939 ignition[967]: INFO : Ignition 2.19.0
Apr 14 00:15:09.424939 ignition[967]: INFO : Stage: files
Apr 14 00:15:09.424939 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 00:15:09.424939 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 00:15:09.439682 ignition[967]: DEBUG : files: compiled without relabeling support, skipping
Apr 14 00:15:09.439682 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 14 00:15:09.439682 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 14 00:15:09.453723 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 14 00:15:09.453723 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 14 00:15:09.453723 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 14 00:15:09.443698 unknown[967]: wrote ssh authorized keys file for user: core
Apr 14 00:15:09.472319 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 00:15:09.472319 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 14 00:15:09.542861 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 14 00:15:09.589918 systemd-networkd[792]: eth0: Gained IPv6LL
Apr 14 00:15:09.674293 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 00:15:09.680391 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 14 00:15:10.376352 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 14 00:15:10.946043 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 00:15:10.953078 ignition[967]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 14 00:15:11.016220 ignition[967]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 00:15:11.016220 ignition[967]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 00:15:11.016220 ignition[967]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 14 00:15:11.016220 ignition[967]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 14 00:15:11.016220 ignition[967]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 00:15:11.016220 ignition[967]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 00:15:11.016220 ignition[967]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 14 00:15:11.016220 ignition[967]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 14 00:15:11.130772 ignition[967]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 00:15:11.139316 ignition[967]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 00:15:11.153320 ignition[967]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 14 00:15:11.153320 ignition[967]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 14 00:15:11.153320 ignition[967]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 14 00:15:11.153320 ignition[967]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 00:15:11.153320 ignition[967]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 00:15:11.153320 ignition[967]: INFO : files: files passed
Apr 14 00:15:11.153320 ignition[967]: INFO : Ignition finished successfully
Apr 14 00:15:11.144321 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 14 00:15:11.233087 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 14 00:15:11.270909 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 14 00:15:11.274922 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 14 00:15:11.275085 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 14 00:15:11.298340 initrd-setup-root-after-ignition[996]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 14 00:15:11.306111 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:15:11.306111 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:15:11.316866 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 00:15:11.333391 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 00:15:11.338164 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
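For context, the op(…) sequence above is Ignition's files stage executing a provisioning config. A minimal Butane sketch (Flatcar's config language, transpiled to Ignition JSON) that would produce this kind of run is shown below — the SSH key and unit body are illustrative placeholders, not recovered from this log:

```yaml
# Hypothetical Butane config reconstructing part of the run above.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... # placeholder key
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
      contents: |
        # illustrative stub; the real unit body is not in the log
        [Unit]
        Description=Unpack helm to /opt/bin
        [Install]
        WantedBy=multi-user.target
    - name: coreos-metadata.service
      enabled: false
```

The op identifiers (op(1), op(3), …) are assigned by Ignition at run time; `enabled: false` on a unit yields the "setting preset to disabled … removing enablement symlink(s)" entries seen above.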
Apr 14 00:15:11.365899 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 14 00:15:11.443915 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 14 00:15:11.444209 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 14 00:15:11.450380 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 14 00:15:11.508239 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 14 00:15:11.513048 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 14 00:15:11.523780 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 14 00:15:11.575001 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 00:15:11.604262 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 14 00:15:11.619246 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 14 00:15:11.622679 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 00:15:11.632939 systemd[1]: Stopped target timers.target - Timer Units. Apr 14 00:15:11.644895 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 14 00:15:11.646599 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 00:15:11.709990 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 14 00:15:11.711528 systemd[1]: Stopped target basic.target - Basic System. Apr 14 00:15:11.714589 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 14 00:15:11.716103 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 14 00:15:11.734627 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 14 00:15:11.738477 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Apr 14 00:15:11.745627 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 14 00:15:11.753658 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 14 00:15:11.777342 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 14 00:15:11.781034 systemd[1]: Stopped target swap.target - Swaps. Apr 14 00:15:11.783341 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 14 00:15:11.783548 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 14 00:15:11.798385 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 14 00:15:11.801650 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 00:15:11.802562 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 14 00:15:11.803000 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 00:15:11.815653 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 14 00:15:11.815802 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 14 00:15:11.826208 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 14 00:15:11.826555 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 14 00:15:11.834046 systemd[1]: Stopped target paths.target - Path Units. Apr 14 00:15:11.844352 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 14 00:15:11.849591 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 00:15:11.855938 systemd[1]: Stopped target slices.target - Slice Units. Apr 14 00:15:11.888190 systemd[1]: Stopped target sockets.target - Socket Units. Apr 14 00:15:11.894201 systemd[1]: iscsid.socket: Deactivated successfully. Apr 14 00:15:11.896954 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Apr 14 00:15:11.901381 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 14 00:15:11.901687 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 14 00:15:11.907714 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 14 00:15:11.908229 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 14 00:15:11.917551 systemd[1]: ignition-files.service: Deactivated successfully. Apr 14 00:15:11.917863 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 14 00:15:11.937163 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 14 00:15:11.954620 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 14 00:15:11.980355 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 14 00:15:11.980665 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 00:15:12.000210 ignition[1022]: INFO : Ignition 2.19.0 Apr 14 00:15:12.000210 ignition[1022]: INFO : Stage: umount Apr 14 00:15:12.000210 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 00:15:12.000210 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 00:15:11.980870 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 14 00:15:12.009253 ignition[1022]: INFO : umount: umount passed Apr 14 00:15:12.009253 ignition[1022]: INFO : Ignition finished successfully Apr 14 00:15:11.980963 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 14 00:15:12.003148 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 14 00:15:12.003245 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 14 00:15:12.007680 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 14 00:15:12.007810 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Apr 14 00:15:12.012498 systemd[1]: Stopped target network.target - Network. Apr 14 00:15:12.013178 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 14 00:15:12.013233 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 14 00:15:12.014322 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 14 00:15:12.014369 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 14 00:15:12.016553 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 14 00:15:12.016594 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 14 00:15:12.018080 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 14 00:15:12.018129 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 14 00:15:12.019555 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 14 00:15:12.024384 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 14 00:15:12.104268 systemd-networkd[792]: eth0: DHCPv6 lease lost Apr 14 00:15:12.104390 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 14 00:15:12.104710 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 14 00:15:12.109264 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 14 00:15:12.109350 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 00:15:12.131292 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 14 00:15:12.141367 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 14 00:15:12.142914 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 14 00:15:12.155922 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 14 00:15:12.156025 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 14 00:15:12.221950 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Apr 14 00:15:12.222677 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 14 00:15:12.222788 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 14 00:15:12.234297 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 14 00:15:12.234377 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 14 00:15:12.243484 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 14 00:15:12.243559 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 14 00:15:12.251749 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 00:15:12.265198 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 14 00:15:12.267380 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 14 00:15:12.273198 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 14 00:15:12.273258 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 14 00:15:12.284098 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 14 00:15:12.284211 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 14 00:15:12.286131 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 14 00:15:12.286273 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 00:15:12.293727 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 14 00:15:12.293783 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 14 00:15:12.299183 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 14 00:15:12.299234 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 00:15:12.303014 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 14 00:15:12.303077 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Apr 14 00:15:12.315927 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 14 00:15:12.316049 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 14 00:15:12.325499 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 14 00:15:12.332243 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 00:15:12.409113 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 14 00:15:12.411794 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 14 00:15:12.411905 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 00:15:12.431238 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 14 00:15:12.431319 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 00:15:12.432549 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 14 00:15:12.432610 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 00:15:12.436368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 00:15:12.436484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 00:15:12.467540 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 14 00:15:12.467792 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 14 00:15:12.469960 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 14 00:15:12.473574 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 14 00:15:12.502272 systemd[1]: Switching root. Apr 14 00:15:12.540614 systemd-journald[194]: Journal stopped Apr 14 00:15:14.505722 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Apr 14 00:15:14.505819 kernel: SELinux: policy capability network_peer_controls=1 Apr 14 00:15:14.505869 kernel: SELinux: policy capability open_perms=1 Apr 14 00:15:14.505884 kernel: SELinux: policy capability extended_socket_class=1 Apr 14 00:15:14.505894 kernel: SELinux: policy capability always_check_network=0 Apr 14 00:15:14.505905 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 14 00:15:14.505924 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 14 00:15:14.505936 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 14 00:15:14.505947 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 14 00:15:14.505958 kernel: audit: type=1403 audit(1776125712.773:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 14 00:15:14.505982 systemd[1]: Successfully loaded SELinux policy in 61.184ms. Apr 14 00:15:14.506005 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.241ms. Apr 14 00:15:14.506023 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 14 00:15:14.506038 systemd[1]: Detected virtualization kvm. Apr 14 00:15:14.506050 systemd[1]: Detected architecture x86-64. Apr 14 00:15:14.506064 systemd[1]: Detected first boot. Apr 14 00:15:14.506076 systemd[1]: Initializing machine ID from VM UUID. Apr 14 00:15:14.506088 zram_generator::config[1065]: No configuration found. Apr 14 00:15:14.506100 systemd[1]: Populated /etc with preset unit settings. Apr 14 00:15:14.506112 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 14 00:15:14.506123 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Apr 14 00:15:14.506136 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 14 00:15:14.506151 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 14 00:15:14.506167 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 14 00:15:14.506179 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 14 00:15:14.506192 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 14 00:15:14.506205 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 14 00:15:14.506219 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 14 00:15:14.506232 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 14 00:15:14.506244 systemd[1]: Created slice user.slice - User and Session Slice. Apr 14 00:15:14.506258 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 00:15:14.506272 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 00:15:14.506288 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 14 00:15:14.506302 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 14 00:15:14.506317 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 14 00:15:14.506330 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 14 00:15:14.506343 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 14 00:15:14.506355 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 00:15:14.506367 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Apr 14 00:15:14.506381 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 14 00:15:14.506395 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 14 00:15:14.506539 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 14 00:15:14.506559 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 00:15:14.506572 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 14 00:15:14.506585 systemd[1]: Reached target slices.target - Slice Units. Apr 14 00:15:14.506598 systemd[1]: Reached target swap.target - Swaps. Apr 14 00:15:14.506609 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 14 00:15:14.506621 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 14 00:15:14.506636 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 14 00:15:14.506649 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 14 00:15:14.506662 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 00:15:14.506675 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 14 00:15:14.506689 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 14 00:15:14.506701 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 14 00:15:14.506712 systemd[1]: Mounting media.mount - External Media Directory... Apr 14 00:15:14.506725 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 00:15:14.506737 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 14 00:15:14.506825 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 14 00:15:14.507598 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 14 00:15:14.507628 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 14 00:15:14.507641 systemd[1]: Reached target machines.target - Containers. Apr 14 00:15:14.507654 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 14 00:15:14.507666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 00:15:14.507680 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 14 00:15:14.507705 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 14 00:15:14.507718 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 00:15:14.507735 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 00:15:14.507748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 00:15:14.507761 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 14 00:15:14.507773 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 00:15:14.507785 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 14 00:15:14.507797 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 14 00:15:14.507810 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 14 00:15:14.507821 kernel: loop: module loaded Apr 14 00:15:14.507869 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 14 00:15:14.507882 systemd[1]: Stopped systemd-fsck-usr.service. Apr 14 00:15:14.507895 kernel: fuse: init (API version 7.39) Apr 14 00:15:14.507907 systemd[1]: Starting systemd-journald.service - Journal Service... 
Apr 14 00:15:14.507919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 14 00:15:14.507961 systemd-journald[1146]: Collecting audit messages is disabled. Apr 14 00:15:14.507990 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 14 00:15:14.508005 systemd-journald[1146]: Journal started Apr 14 00:15:14.508033 systemd-journald[1146]: Runtime Journal (/run/log/journal/6a59ddce994f4a3eba90b7e015519c8b) is 6.0M, max 48.3M, 42.2M free. Apr 14 00:15:13.828784 systemd[1]: Queued start job for default target multi-user.target. Apr 14 00:15:13.856250 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 14 00:15:13.862391 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 14 00:15:13.863234 systemd[1]: systemd-journald.service: Consumed 1.147s CPU time. Apr 14 00:15:14.513529 kernel: ACPI: bus type drm_connector registered Apr 14 00:15:14.519487 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 14 00:15:14.532310 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 14 00:15:14.540559 systemd[1]: verity-setup.service: Deactivated successfully. Apr 14 00:15:14.540659 systemd[1]: Stopped verity-setup.service. Apr 14 00:15:14.550678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 00:15:14.554966 systemd[1]: Started systemd-journald.service - Journal Service. Apr 14 00:15:14.585643 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 14 00:15:14.592774 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 14 00:15:14.598903 systemd[1]: Mounted media.mount - External Media Directory. Apr 14 00:15:14.601749 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Apr 14 00:15:14.604798 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 14 00:15:14.609084 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 14 00:15:14.612295 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 14 00:15:14.616526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 00:15:14.621132 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 14 00:15:14.621874 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 14 00:15:14.627017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 00:15:14.628964 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 00:15:14.635536 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 00:15:14.635917 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 00:15:14.639380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 00:15:14.640059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 00:15:14.644175 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 14 00:15:14.644880 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 14 00:15:14.648092 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 00:15:14.648481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 00:15:14.652857 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 14 00:15:14.656371 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 14 00:15:14.660034 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 14 00:15:14.663807 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 14 00:15:14.678087 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 14 00:15:14.687772 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 14 00:15:14.718149 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 14 00:15:14.722231 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 14 00:15:14.722302 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 14 00:15:14.728069 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 14 00:15:14.734224 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 14 00:15:14.740273 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 14 00:15:14.744142 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 00:15:14.746268 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 14 00:15:14.753199 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 14 00:15:14.784389 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 00:15:14.816081 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 14 00:15:14.822632 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 00:15:14.824664 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 00:15:14.830746 systemd-journald[1146]: Time spent on flushing to /var/log/journal/6a59ddce994f4a3eba90b7e015519c8b is 35.742ms for 998 entries. 
Apr 14 00:15:14.830746 systemd-journald[1146]: System Journal (/var/log/journal/6a59ddce994f4a3eba90b7e015519c8b) is 8.0M, max 195.6M, 187.6M free.
Apr 14 00:15:14.889144 systemd-journald[1146]: Received client request to flush runtime journal.
Apr 14 00:15:14.834651 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 14 00:15:14.842773 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 00:15:14.849967 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 14 00:15:14.861039 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 14 00:15:14.866959 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 14 00:15:14.870806 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 14 00:15:14.876728 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 14 00:15:14.892882 kernel: loop0: detected capacity change from 0 to 142488
Apr 14 00:15:14.893590 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 14 00:15:14.899316 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 14 00:15:14.919881 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 14 00:15:14.924137 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 00:15:14.927218 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Apr 14 00:15:14.927357 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Apr 14 00:15:14.938335 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 00:15:14.949664 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 14 00:15:14.959676 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 14 00:15:14.967158 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 14 00:15:14.970005 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 14 00:15:14.970975 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 14 00:15:15.013679 kernel: loop1: detected capacity change from 0 to 228704
Apr 14 00:15:15.046882 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 14 00:15:15.087162 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 00:15:15.103186 kernel: loop2: detected capacity change from 0 to 140768
Apr 14 00:15:15.110214 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Apr 14 00:15:15.110255 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Apr 14 00:15:15.115387 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 00:15:15.169974 kernel: loop3: detected capacity change from 0 to 142488
Apr 14 00:15:15.220959 kernel: loop4: detected capacity change from 0 to 228704
Apr 14 00:15:15.240453 kernel: loop5: detected capacity change from 0 to 140768
Apr 14 00:15:15.328991 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 14 00:15:15.329546 (sd-merge)[1207]: Merged extensions into '/usr'.
Apr 14 00:15:15.340906 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 14 00:15:15.341082 systemd[1]: Reloading...
Apr 14 00:15:15.414545 zram_generator::config[1234]: No configuration found.
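The (sd-merge) lines record systemd-sysext overlaying the extension images linked under /etc/extensions (including the kubernetes.raw symlink written by Ignition earlier) onto /usr. A sysext image is only merged if it carries an extension-release file whose fields match the host's os-release; a sketch of the expected content — the values shown are the usual Flatcar convention, not read from this image:

```
# Inside the image, at usr/lib/extension-release.d/extension-release.kubernetes
ID=flatcar
SYSEXT_LEVEL=1.0
ARCHITECTURE=x86-64
```

An image may instead declare ID=_any to be accepted on any distribution; a mismatch on ID, SYSEXT_LEVEL, or ARCHITECTURE causes the image to be skipped rather than merged.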
Apr 14 00:15:15.728069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:15:15.840981 ldconfig[1175]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 14 00:15:15.842461 systemd[1]: Reloading finished in 500 ms.
Apr 14 00:15:15.902173 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 14 00:15:15.907221 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 14 00:15:15.954628 systemd[1]: Starting ensure-sysext.service...
Apr 14 00:15:16.007328 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 00:15:16.041225 systemd[1]: Reloading requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)...
Apr 14 00:15:16.041254 systemd[1]: Reloading...
Apr 14 00:15:16.089165 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 14 00:15:16.090114 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 14 00:15:16.091168 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 14 00:15:16.091707 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Apr 14 00:15:16.091761 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Apr 14 00:15:16.096982 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 00:15:16.097154 systemd-tmpfiles[1271]: Skipping /boot
Apr 14 00:15:16.127602 zram_generator::config[1297]: No configuration found.
Apr 14 00:15:16.129562 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 00:15:16.129573 systemd-tmpfiles[1271]: Skipping /boot
Apr 14 00:15:16.421891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:15:16.538215 systemd[1]: Reloading finished in 496 ms.
Apr 14 00:15:16.572600 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 14 00:15:16.594317 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 00:15:16.648827 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 00:15:16.693213 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 14 00:15:16.704009 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 14 00:15:16.725753 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 00:15:16.745165 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 00:15:16.765701 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 14 00:15:16.773657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:15:16.773808 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:15:16.776308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:15:16.784905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:15:16.791654 augenrules[1358]: No rules
Apr 14 00:15:16.801168 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:15:16.804948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:15:16.815086 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 14 00:15:16.819368 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Apr 14 00:15:16.821687 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:15:16.823389 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 00:15:16.826093 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 14 00:15:16.835122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:15:16.836030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:15:16.840548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:15:16.840961 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:15:16.850698 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:15:16.851173 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:15:16.912378 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 00:15:16.926188 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:15:16.926710 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:15:16.936830 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:15:16.950487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:15:16.963066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:15:16.967107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:15:16.971356 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 00:15:16.978537 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 14 00:15:16.982665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:15:16.987550 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 14 00:15:16.994048 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 14 00:15:16.998908 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 14 00:15:17.006788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:15:17.007890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:15:17.020261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:15:17.020463 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:15:17.027714 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:15:17.036215 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:15:17.111747 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 14 00:15:17.121114 systemd[1]: Finished ensure-sysext.service.
Apr 14 00:15:17.131497 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 14 00:15:17.131646 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:15:17.131773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 00:15:17.139902 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 00:15:17.152382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 00:15:17.157776 systemd-resolved[1347]: Positive Trust Anchors:
Apr 14 00:15:17.157788 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 00:15:17.157819 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 00:15:17.166924 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 00:15:17.167997 systemd-resolved[1347]: Defaulting to hostname 'linux'.
Apr 14 00:15:17.175119 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 00:15:17.178798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 00:15:17.192892 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 14 00:15:17.196635 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 14 00:15:17.196668 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 00:15:17.197077 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 00:15:17.205208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 00:15:17.206583 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 00:15:17.211698 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 00:15:17.211951 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 00:15:17.216381 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 00:15:17.220172 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 00:15:17.252592 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 14 00:15:17.302360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 00:15:17.302880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 00:15:17.320169 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1371)
Apr 14 00:15:17.317465 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 00:15:17.323273 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 00:15:17.329490 kernel: ACPI: button: Power Button [PWRF]
Apr 14 00:15:17.340157 systemd-networkd[1397]: lo: Link UP
Apr 14 00:15:17.340167 systemd-networkd[1397]: lo: Gained carrier
Apr 14 00:15:17.343573 systemd-networkd[1397]: Enumeration completed
Apr 14 00:15:17.352728 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 14 00:15:17.364957 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:15:17.365108 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 00:15:17.366240 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 00:15:17.366342 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 00:15:17.366608 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 00:15:17.366961 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:15:17.366991 systemd-networkd[1397]: eth0: Link UP
Apr 14 00:15:17.366994 systemd-networkd[1397]: eth0: Gained carrier
Apr 14 00:15:17.367004 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 00:15:17.376729 systemd[1]: Reached target network.target - Network.
Apr 14 00:15:17.394677 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 00:15:17.394789 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 14 00:15:17.425500 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 14 00:15:17.431377 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 14 00:15:17.505574 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 14 00:15:17.506246 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 14 00:15:17.517577 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 14 00:15:17.524086 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 14 00:15:17.554136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:15:17.583720 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 14 00:15:17.589806 systemd-timesyncd[1419]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 14 00:15:17.590024 systemd-timesyncd[1419]: Initial clock synchronization to Tue 2026-04-14 00:15:17.281107 UTC.
Apr 14 00:15:17.595212 systemd[1]: Reached target time-set.target - System Time Set.
Apr 14 00:15:17.625163 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 00:15:17.626801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:15:17.877079 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 00:15:18.025640 kernel: mousedev: PS/2 mouse device common for all mice
Apr 14 00:15:18.111676 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 00:15:18.387083 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 14 00:15:18.407164 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 14 00:15:18.423765 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 00:15:18.476728 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 14 00:15:18.487166 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 00:15:18.494261 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 00:15:18.503768 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 14 00:15:18.507923 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 14 00:15:18.511899 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 14 00:15:18.515679 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 14 00:15:18.519431 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 14 00:15:18.523685 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 14 00:15:18.523748 systemd[1]: Reached target paths.target - Path Units.
Apr 14 00:15:18.527944 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 00:15:18.574964 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 14 00:15:18.581975 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 14 00:15:18.594894 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 14 00:15:18.600626 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 14 00:15:18.605308 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 14 00:15:18.610188 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 00:15:18.613579 systemd[1]: Reached target basic.target - Basic System.
Apr 14 00:15:18.616614 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 14 00:15:18.616636 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 14 00:15:18.617536 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 00:15:18.617849 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 14 00:15:18.624007 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 14 00:15:18.627966 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 14 00:15:18.652254 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 14 00:15:18.655546 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 14 00:15:18.659806 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 14 00:15:18.666551 jq[1450]: false
Apr 14 00:15:18.667493 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 14 00:15:18.673677 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found loop3
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found loop4
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found loop5
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found sr0
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found vda
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found vda1
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found vda2
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found vda3
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found usr
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found vda4
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found vda6
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found vda7
Apr 14 00:15:18.681950 extend-filesystems[1451]: Found vda9
Apr 14 00:15:18.681950 extend-filesystems[1451]: Checking size of /dev/vda9
Apr 14 00:15:18.770062 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 14 00:15:18.683655 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 14 00:15:18.770577 extend-filesystems[1451]: Resized partition /dev/vda9
Apr 14 00:15:18.690955 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 14 00:15:18.774683 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024)
Apr 14 00:15:18.693961 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 14 00:15:18.777106 dbus-daemon[1449]: [system] SELinux support is enabled
Apr 14 00:15:18.695782 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 14 00:15:18.697810 systemd[1]: Starting update-engine.service - Update Engine...
Apr 14 00:15:18.704503 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 14 00:15:18.706562 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 14 00:15:18.791808 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1369)
Apr 14 00:15:18.712854 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 14 00:15:18.798809 jq[1466]: true
Apr 14 00:15:18.713101 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 14 00:15:18.722512 systemd[1]: motdgen.service: Deactivated successfully.
Apr 14 00:15:18.723067 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 14 00:15:18.739656 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 14 00:15:18.748890 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 14 00:15:18.749449 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 14 00:15:18.779032 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 14 00:15:18.802590 systemd-networkd[1397]: eth0: Gained IPv6LL
Apr 14 00:15:18.850894 jq[1482]: true
Apr 14 00:15:18.853943 update_engine[1462]: I20260414 00:15:18.853595 1462 main.cc:92] Flatcar Update Engine starting
Apr 14 00:15:18.856809 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 14 00:15:18.857006 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 14 00:15:18.864100 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 14 00:15:18.864135 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 14 00:15:18.870091 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 14 00:15:18.880249 systemd[1]: Reached target network-online.target - Network is Online.
Apr 14 00:15:18.881010 tar[1468]: linux-amd64/LICENSE
Apr 14 00:15:18.881286 tar[1468]: linux-amd64/helm
Apr 14 00:15:18.901052 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 14 00:15:18.966613 update_engine[1462]: I20260414 00:15:18.904182 1462 update_check_scheduler.cc:74] Next update check in 8m7s
Apr 14 00:15:18.907894 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 14 00:15:18.918888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:15:18.928043 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 14 00:15:18.973038 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 14 00:15:18.973038 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 14 00:15:18.973038 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 14 00:15:18.963897 systemd[1]: Started update-engine.service - Update Engine.
Apr 14 00:15:18.995541 extend-filesystems[1451]: Resized filesystem in /dev/vda9
Apr 14 00:15:19.002131 bash[1507]: Updated "/home/core/.ssh/authorized_keys"
Apr 14 00:15:18.981790 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 14 00:15:18.993815 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 14 00:15:18.995587 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 14 00:15:19.001054 systemd-logind[1460]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 14 00:15:19.001067 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 14 00:15:19.008870 systemd-logind[1460]: New seat seat0.
Apr 14 00:15:19.014467 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 14 00:15:19.051004 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 14 00:15:19.095804 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 14 00:15:19.096689 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 14 00:15:19.110080 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 14 00:15:19.110667 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 14 00:15:19.129986 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 14 00:15:19.165366 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 14 00:15:19.334340 containerd[1477]: time="2026-04-14T00:15:19.332792819Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 14 00:15:19.450282 containerd[1477]: time="2026-04-14T00:15:19.448964443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:15:19.457200 containerd[1477]: time="2026-04-14T00:15:19.456955604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:15:19.457200 containerd[1477]: time="2026-04-14T00:15:19.457194504Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 14 00:15:19.457452 containerd[1477]: time="2026-04-14T00:15:19.457239257Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 14 00:15:19.457712 containerd[1477]: time="2026-04-14T00:15:19.457522910Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 14 00:15:19.457712 containerd[1477]: time="2026-04-14T00:15:19.457548357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 14 00:15:19.457712 containerd[1477]: time="2026-04-14T00:15:19.457611422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:15:19.457712 containerd[1477]: time="2026-04-14T00:15:19.457623612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:15:19.458704 containerd[1477]: time="2026-04-14T00:15:19.457962427Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:15:19.458704 containerd[1477]: time="2026-04-14T00:15:19.457987867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 14 00:15:19.458704 containerd[1477]: time="2026-04-14T00:15:19.458006236Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:15:19.458704 containerd[1477]: time="2026-04-14T00:15:19.458016032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 14 00:15:19.458704 containerd[1477]: time="2026-04-14T00:15:19.458101380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:15:19.458704 containerd[1477]: time="2026-04-14T00:15:19.458299096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 14 00:15:19.458704 containerd[1477]: time="2026-04-14T00:15:19.458672054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 00:15:19.458704 containerd[1477]: time="2026-04-14T00:15:19.458692236Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 14 00:15:19.459002 containerd[1477]: time="2026-04-14T00:15:19.458817876Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 14 00:15:19.459002 containerd[1477]: time="2026-04-14T00:15:19.458865621Z" level=info msg="metadata content store policy set" policy=shared
Apr 14 00:15:19.469792 containerd[1477]: time="2026-04-14T00:15:19.469629663Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 14 00:15:19.469954 containerd[1477]: time="2026-04-14T00:15:19.469908275Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 14 00:15:19.469954 containerd[1477]: time="2026-04-14T00:15:19.469930360Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 14 00:15:19.469954 containerd[1477]: time="2026-04-14T00:15:19.469947847Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 14 00:15:19.470069 containerd[1477]: time="2026-04-14T00:15:19.469960999Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470138137Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470615323Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470747611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470758489Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470767346Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470778405Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470787219Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470799369Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470811520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470825176Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470834599Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470843978Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470851881Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 14 00:15:19.471045 containerd[1477]: time="2026-04-14T00:15:19.470869836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470882548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470892980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470901487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470910176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470919437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470928189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470939198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470948845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470959454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470970117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470978720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.470988075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.471003570Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.471017763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.471673 containerd[1477]: time="2026-04-14T00:15:19.471026090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 14 00:15:19.472082 containerd[1477]: time="2026-04-14T00:15:19.471047903Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 14 00:15:19.472082 containerd[1477]: time="2026-04-14T00:15:19.471099100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 14 00:15:19.472082 containerd[1477]: time="2026-04-14T00:15:19.471116218Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 14 00:15:19.472082 containerd[1477]: time="2026-04-14T00:15:19.471126157Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 14 00:15:19.472082 containerd[1477]: time="2026-04-14T00:15:19.471135417Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 14 00:15:19.472082 containerd[1477]: time="2026-04-14T00:15:19.471142106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..."
type=io.containerd.grpc.v1 Apr 14 00:15:19.472082 containerd[1477]: time="2026-04-14T00:15:19.471152264Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 14 00:15:19.472082 containerd[1477]: time="2026-04-14T00:15:19.471162347Z" level=info msg="NRI interface is disabled by configuration." Apr 14 00:15:19.472082 containerd[1477]: time="2026-04-14T00:15:19.471170173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 14 00:15:19.472558 containerd[1477]: time="2026-04-14T00:15:19.471978185Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 14 00:15:19.472558 containerd[1477]: time="2026-04-14T00:15:19.472032483Z" level=info msg="Connect containerd service" Apr 14 00:15:19.472558 containerd[1477]: time="2026-04-14T00:15:19.472061273Z" level=info msg="using legacy CRI server" Apr 14 00:15:19.472558 containerd[1477]: time="2026-04-14T00:15:19.472066741Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 14 00:15:19.477609 containerd[1477]: time="2026-04-14T00:15:19.477061530Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 14 00:15:19.479828 containerd[1477]: time="2026-04-14T00:15:19.478632029Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Apr 14 00:15:19.479828 containerd[1477]: time="2026-04-14T00:15:19.478939538Z" level=info msg="Start subscribing containerd event" Apr 14 00:15:19.479828 containerd[1477]: time="2026-04-14T00:15:19.479002576Z" level=info msg="Start recovering state" Apr 14 00:15:19.479828 containerd[1477]: time="2026-04-14T00:15:19.479277652Z" level=info msg="Start event monitor" Apr 14 00:15:19.479828 containerd[1477]: time="2026-04-14T00:15:19.479294501Z" level=info msg="Start snapshots syncer" Apr 14 00:15:19.479828 containerd[1477]: time="2026-04-14T00:15:19.479306828Z" level=info msg="Start cni network conf syncer for default" Apr 14 00:15:19.479828 containerd[1477]: time="2026-04-14T00:15:19.479320339Z" level=info msg="Start streaming server" Apr 14 00:15:19.480126 containerd[1477]: time="2026-04-14T00:15:19.479938074Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 14 00:15:19.481485 containerd[1477]: time="2026-04-14T00:15:19.480144098Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 14 00:15:19.480490 systemd[1]: Started containerd.service - containerd container runtime. Apr 14 00:15:19.485181 containerd[1477]: time="2026-04-14T00:15:19.485009505Z" level=info msg="containerd successfully booted in 0.155635s" Apr 14 00:15:19.584051 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 14 00:15:19.633598 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 14 00:15:19.647119 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 14 00:15:19.670807 systemd[1]: issuegen.service: Deactivated successfully. Apr 14 00:15:19.673132 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 14 00:15:19.688136 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 14 00:15:19.741595 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Apr 14 00:15:19.759175 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 14 00:15:19.766780 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 14 00:15:19.772350 systemd[1]: Reached target getty.target - Login Prompts. Apr 14 00:15:19.961493 tar[1468]: linux-amd64/README.md Apr 14 00:15:19.976689 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 14 00:15:21.516747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:15:21.517212 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:15:21.521681 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 14 00:15:21.525880 systemd[1]: Startup finished in 1.970s (kernel) + 9.775s (initrd) + 8.812s (userspace) = 20.558s. Apr 14 00:15:23.256050 kubelet[1563]: E0414 00:15:23.255829 1563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:15:23.264041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:15:23.264605 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:15:23.266729 systemd[1]: kubelet.service: Consumed 1.640s CPU time. Apr 14 00:15:28.209768 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 14 00:15:28.243593 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:46822.service - OpenSSH per-connection server daemon (10.0.0.1:46822). 
Apr 14 00:15:28.360259 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 46822 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:15:28.373141 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:15:28.396068 systemd-logind[1460]: New session 1 of user core. Apr 14 00:15:28.397874 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 14 00:15:28.412558 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 14 00:15:28.461876 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 14 00:15:28.480828 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 14 00:15:28.486260 (systemd)[1581]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 14 00:15:28.731151 systemd[1581]: Queued start job for default target default.target. Apr 14 00:15:28.767796 systemd[1581]: Created slice app.slice - User Application Slice. Apr 14 00:15:28.768033 systemd[1581]: Reached target paths.target - Paths. Apr 14 00:15:28.768052 systemd[1581]: Reached target timers.target - Timers. Apr 14 00:15:28.786279 systemd[1581]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 14 00:15:28.801729 systemd[1581]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 14 00:15:28.802097 systemd[1581]: Reached target sockets.target - Sockets. Apr 14 00:15:28.802118 systemd[1581]: Reached target basic.target - Basic System. Apr 14 00:15:28.802162 systemd[1581]: Reached target default.target - Main User Target. Apr 14 00:15:28.802187 systemd[1581]: Startup finished in 302ms. Apr 14 00:15:28.802728 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 14 00:15:28.859609 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 14 00:15:28.936782 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:46838.service - OpenSSH per-connection server daemon (10.0.0.1:46838). Apr 14 00:15:29.069528 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 46838 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:15:29.071921 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:15:29.086886 systemd-logind[1460]: New session 2 of user core. Apr 14 00:15:29.098637 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 14 00:15:29.200211 sshd[1592]: pam_unix(sshd:session): session closed for user core Apr 14 00:15:29.241573 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:46838.service: Deactivated successfully. Apr 14 00:15:29.246154 systemd[1]: session-2.scope: Deactivated successfully. Apr 14 00:15:29.255845 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Apr 14 00:15:29.276194 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:46854.service - OpenSSH per-connection server daemon (10.0.0.1:46854). Apr 14 00:15:29.280061 systemd-logind[1460]: Removed session 2. Apr 14 00:15:29.352996 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 46854 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:15:29.355306 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:15:29.372514 systemd-logind[1460]: New session 3 of user core. Apr 14 00:15:29.387868 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 14 00:15:29.457076 sshd[1599]: pam_unix(sshd:session): session closed for user core Apr 14 00:15:29.477977 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:46854.service: Deactivated successfully. Apr 14 00:15:29.482063 systemd[1]: session-3.scope: Deactivated successfully. Apr 14 00:15:29.487256 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. 
Apr 14 00:15:29.498794 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:46858.service - OpenSSH per-connection server daemon (10.0.0.1:46858). Apr 14 00:15:29.500529 systemd-logind[1460]: Removed session 3. Apr 14 00:15:29.582967 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 46858 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:15:29.585289 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:15:29.610629 systemd-logind[1460]: New session 4 of user core. Apr 14 00:15:29.625981 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 14 00:15:29.753951 sshd[1606]: pam_unix(sshd:session): session closed for user core Apr 14 00:15:29.777270 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:46858.service: Deactivated successfully. Apr 14 00:15:29.785615 systemd[1]: session-4.scope: Deactivated successfully. Apr 14 00:15:29.789232 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Apr 14 00:15:29.815844 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:46872.service - OpenSSH per-connection server daemon (10.0.0.1:46872). Apr 14 00:15:29.820042 systemd-logind[1460]: Removed session 4. Apr 14 00:15:29.889117 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 46872 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:15:29.891139 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:15:29.938924 systemd-logind[1460]: New session 5 of user core. Apr 14 00:15:29.958988 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 14 00:15:30.078809 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 14 00:15:30.079998 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 00:15:30.959151 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 14 00:15:30.959264 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 14 00:15:31.804464 dockerd[1635]: time="2026-04-14T00:15:31.804202451Z" level=info msg="Starting up" Apr 14 00:15:32.140191 dockerd[1635]: time="2026-04-14T00:15:32.139830969Z" level=info msg="Loading containers: start." Apr 14 00:15:32.674054 kernel: Initializing XFRM netlink socket Apr 14 00:15:33.125760 systemd-networkd[1397]: docker0: Link UP Apr 14 00:15:33.229985 dockerd[1635]: time="2026-04-14T00:15:33.229675440Z" level=info msg="Loading containers: done." Apr 14 00:15:33.322020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 14 00:15:33.334459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:15:33.336749 dockerd[1635]: time="2026-04-14T00:15:33.336626706Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 14 00:15:33.336838 dockerd[1635]: time="2026-04-14T00:15:33.336780445Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 14 00:15:33.336916 dockerd[1635]: time="2026-04-14T00:15:33.336875794Z" level=info msg="Daemon has completed initialization" Apr 14 00:15:33.606235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 00:15:33.613088 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:15:33.616830 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 14 00:15:33.629333 dockerd[1635]: time="2026-04-14T00:15:33.617145519Z" level=info msg="API listen on /run/docker.sock" Apr 14 00:15:33.750157 kubelet[1774]: E0414 00:15:33.749897 1774 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:15:33.757820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:15:33.759744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:15:35.541074 containerd[1477]: time="2026-04-14T00:15:35.541024303Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 14 00:15:36.730620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2283203847.mount: Deactivated successfully. 
Apr 14 00:15:41.311120 containerd[1477]: time="2026-04-14T00:15:41.310976600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:41.316899 containerd[1477]: time="2026-04-14T00:15:41.316799368Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857" Apr 14 00:15:41.322093 containerd[1477]: time="2026-04-14T00:15:41.321572438Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:41.336789 containerd[1477]: time="2026-04-14T00:15:41.333855259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:41.338801 containerd[1477]: time="2026-04-14T00:15:41.337361663Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 5.794778248s" Apr 14 00:15:41.338801 containerd[1477]: time="2026-04-14T00:15:41.337791742Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 14 00:15:41.350046 containerd[1477]: time="2026-04-14T00:15:41.349694088Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 14 00:15:43.856531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 14 00:15:43.866131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:15:44.127740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:15:44.128468 (kubelet)[1870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:15:44.275093 kubelet[1870]: E0414 00:15:44.274101 1870 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:15:44.279369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:15:44.279891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:15:44.591773 containerd[1477]: time="2026-04-14T00:15:44.590169693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:44.597948 containerd[1477]: time="2026-04-14T00:15:44.597738658Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841" Apr 14 00:15:44.599355 containerd[1477]: time="2026-04-14T00:15:44.598871819Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:44.607745 containerd[1477]: time="2026-04-14T00:15:44.607625072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:44.611786 containerd[1477]: 
time="2026-04-14T00:15:44.611583913Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 3.261827314s" Apr 14 00:15:44.611786 containerd[1477]: time="2026-04-14T00:15:44.611803931Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 14 00:15:44.614128 containerd[1477]: time="2026-04-14T00:15:44.613929981Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 14 00:15:47.104054 containerd[1477]: time="2026-04-14T00:15:47.103782125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:47.106921 containerd[1477]: time="2026-04-14T00:15:47.106767866Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685" Apr 14 00:15:47.114324 containerd[1477]: time="2026-04-14T00:15:47.113294179Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:47.123767 containerd[1477]: time="2026-04-14T00:15:47.123609634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:47.128850 containerd[1477]: time="2026-04-14T00:15:47.128716695Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id 
\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 2.514551299s" Apr 14 00:15:47.128850 containerd[1477]: time="2026-04-14T00:15:47.128828625Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 14 00:15:47.130826 containerd[1477]: time="2026-04-14T00:15:47.130371998Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 14 00:15:49.296691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283763306.mount: Deactivated successfully. Apr 14 00:15:51.471673 containerd[1477]: time="2026-04-14T00:15:51.471499859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:51.474187 containerd[1477]: time="2026-04-14T00:15:51.471559527Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657" Apr 14 00:15:51.476120 containerd[1477]: time="2026-04-14T00:15:51.475988587Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:51.483343 containerd[1477]: time="2026-04-14T00:15:51.482982264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:51.485152 containerd[1477]: time="2026-04-14T00:15:51.484845438Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo 
tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 4.354433529s" Apr 14 00:15:51.485152 containerd[1477]: time="2026-04-14T00:15:51.485151582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 14 00:15:51.487366 containerd[1477]: time="2026-04-14T00:15:51.487314773Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 14 00:15:52.440577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount11292279.mount: Deactivated successfully. Apr 14 00:15:54.404786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 14 00:15:54.428479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:15:54.690990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:15:54.700045 (kubelet)[1951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:15:54.919265 kubelet[1951]: E0414 00:15:54.918397 1951 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:15:54.925137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:15:54.926964 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 14 00:15:55.969950 containerd[1477]: time="2026-04-14T00:15:55.969808718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:55.978760 containerd[1477]: time="2026-04-14T00:15:55.978510828Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 14 00:15:55.998575 containerd[1477]: time="2026-04-14T00:15:55.998274098Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:56.017592 containerd[1477]: time="2026-04-14T00:15:56.016945508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:56.024458 containerd[1477]: time="2026-04-14T00:15:56.023764008Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.536395969s" Apr 14 00:15:56.024458 containerd[1477]: time="2026-04-14T00:15:56.023829672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 14 00:15:56.030246 containerd[1477]: time="2026-04-14T00:15:56.028872505Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 14 00:15:57.322566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount388084139.mount: Deactivated successfully. 
Apr 14 00:15:57.473123 containerd[1477]: time="2026-04-14T00:15:57.469653829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:57.477293 containerd[1477]: time="2026-04-14T00:15:57.477053736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 14 00:15:57.485508 containerd[1477]: time="2026-04-14T00:15:57.484994795Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:57.507873 containerd[1477]: time="2026-04-14T00:15:57.505817461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:15:57.518491 containerd[1477]: time="2026-04-14T00:15:57.518319975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.489392114s" Apr 14 00:15:57.518491 containerd[1477]: time="2026-04-14T00:15:57.518396943Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 14 00:15:57.520514 containerd[1477]: time="2026-04-14T00:15:57.519046450Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 14 00:15:58.393769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117137841.mount: Deactivated successfully. 
Apr 14 00:16:03.278033 containerd[1477]: time="2026-04-14T00:16:03.275814021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:16:03.281795 containerd[1477]: time="2026-04-14T00:16:03.281659370Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278" Apr 14 00:16:03.288605 containerd[1477]: time="2026-04-14T00:16:03.286870898Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:16:03.306808 containerd[1477]: time="2026-04-14T00:16:03.305634240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:16:03.311084 containerd[1477]: time="2026-04-14T00:16:03.310840484Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 5.791755867s" Apr 14 00:16:03.311084 containerd[1477]: time="2026-04-14T00:16:03.310938452Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 14 00:16:04.000854 update_engine[1462]: I20260414 00:16:03.999500 1462 update_attempter.cc:509] Updating boot flags... 
Apr 14 00:16:04.098563 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2056) Apr 14 00:16:04.215817 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2059) Apr 14 00:16:05.105098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 14 00:16:05.147056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:16:05.510827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:16:05.520949 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:16:05.719585 kubelet[2073]: E0414 00:16:05.719152 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:16:05.724328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:16:05.724840 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:16:08.651329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:16:08.695286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:16:08.745897 systemd[1]: Reloading requested from client PID 2088 ('systemctl') (unit session-5.scope)... Apr 14 00:16:08.746069 systemd[1]: Reloading... Apr 14 00:16:08.930666 zram_generator::config[2130]: No configuration found. Apr 14 00:16:09.237906 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 14 00:16:09.393339 systemd[1]: Reloading finished in 646 ms. Apr 14 00:16:09.558369 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 14 00:16:09.558639 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 14 00:16:09.558993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:16:09.564163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:16:09.914030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:16:09.935197 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 00:16:10.119267 kubelet[2176]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 00:16:10.119267 kubelet[2176]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 00:16:10.119267 kubelet[2176]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 14 00:16:10.120214 kubelet[2176]: I0414 00:16:10.119380 2176 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 00:16:10.983657 kubelet[2176]: I0414 00:16:10.980311 2176 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 00:16:10.983657 kubelet[2176]: I0414 00:16:10.980593 2176 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 00:16:10.999250 kubelet[2176]: I0414 00:16:10.991344 2176 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 00:16:11.127282 kubelet[2176]: E0414 00:16:11.127220 2176 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 00:16:11.133980 kubelet[2176]: I0414 00:16:11.132963 2176 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 00:16:11.180639 kubelet[2176]: E0414 00:16:11.179659 2176 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 00:16:11.181635 kubelet[2176]: I0414 00:16:11.181036 2176 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 14 00:16:11.244588 kubelet[2176]: I0414 00:16:11.244385 2176 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 14 00:16:11.246544 kubelet[2176]: I0414 00:16:11.246442 2176 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 00:16:11.247115 kubelet[2176]: I0414 00:16:11.246552 2176 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 14 00:16:11.247115 kubelet[2176]: I0414 00:16:11.247120 2176 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 00:16:11.247532 
kubelet[2176]: I0414 00:16:11.247137 2176 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 00:16:11.247532 kubelet[2176]: I0414 00:16:11.247489 2176 state_mem.go:36] "Initialized new in-memory state store" Apr 14 00:16:11.260226 kubelet[2176]: I0414 00:16:11.259992 2176 kubelet.go:480] "Attempting to sync node with API server" Apr 14 00:16:11.260461 kubelet[2176]: I0414 00:16:11.260262 2176 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 00:16:11.260461 kubelet[2176]: I0414 00:16:11.260371 2176 kubelet.go:386] "Adding apiserver pod source" Apr 14 00:16:11.260528 kubelet[2176]: I0414 00:16:11.260475 2176 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 00:16:11.276178 kubelet[2176]: E0414 00:16:11.275652 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 00:16:11.281578 kubelet[2176]: I0414 00:16:11.279187 2176 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 00:16:11.281578 kubelet[2176]: I0414 00:16:11.280307 2176 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 00:16:11.281578 kubelet[2176]: E0414 00:16:11.281339 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 00:16:11.294118 kubelet[2176]: W0414 
00:16:11.293864 2176 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 14 00:16:11.314783 kubelet[2176]: I0414 00:16:11.313973 2176 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 00:16:11.314783 kubelet[2176]: I0414 00:16:11.314290 2176 server.go:1289] "Started kubelet" Apr 14 00:16:11.320101 kubelet[2176]: I0414 00:16:11.315352 2176 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 00:16:11.325864 kubelet[2176]: I0414 00:16:11.324332 2176 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 00:16:11.325864 kubelet[2176]: I0414 00:16:11.325242 2176 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 00:16:11.327149 kubelet[2176]: I0414 00:16:11.327126 2176 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 00:16:11.335794 kubelet[2176]: I0414 00:16:11.318868 2176 server.go:317] "Adding debug handlers to kubelet server" Apr 14 00:16:11.335794 kubelet[2176]: E0414 00:16:11.331096 2176 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 00:16:11.335794 kubelet[2176]: I0414 00:16:11.331283 2176 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 00:16:11.335794 kubelet[2176]: E0414 00:16:11.329156 2176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a610fa8b987490 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:16:11.314132112 +0000 UTC m=+1.372016986,LastTimestamp:2026-04-14 00:16:11.314132112 +0000 UTC m=+1.372016986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 00:16:11.335794 kubelet[2176]: E0414 00:16:11.334984 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:16:11.335794 kubelet[2176]: I0414 00:16:11.335074 2176 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 00:16:11.340996 kubelet[2176]: E0414 00:16:11.340659 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms" Apr 14 00:16:11.340996 kubelet[2176]: I0414 00:16:11.340786 2176 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 00:16:11.350373 kubelet[2176]: E0414 
00:16:11.348869 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 00:16:11.350373 kubelet[2176]: I0414 00:16:11.349503 2176 reconciler.go:26] "Reconciler: start to sync state" Apr 14 00:16:11.355836 kubelet[2176]: I0414 00:16:11.355288 2176 factory.go:223] Registration of the systemd container factory successfully Apr 14 00:16:11.358996 kubelet[2176]: I0414 00:16:11.357551 2176 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 00:16:11.369197 kubelet[2176]: I0414 00:16:11.369114 2176 factory.go:223] Registration of the containerd container factory successfully Apr 14 00:16:11.445453 kubelet[2176]: E0414 00:16:11.436854 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:16:11.505210 kubelet[2176]: I0414 00:16:11.505043 2176 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 00:16:11.505210 kubelet[2176]: I0414 00:16:11.505081 2176 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 00:16:11.505210 kubelet[2176]: I0414 00:16:11.505101 2176 state_mem.go:36] "Initialized new in-memory state store" Apr 14 00:16:11.521347 kubelet[2176]: I0414 00:16:11.519800 2176 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 00:16:11.525861 kubelet[2176]: I0414 00:16:11.525468 2176 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 14 00:16:11.529549 kubelet[2176]: I0414 00:16:11.528470 2176 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 00:16:11.529549 kubelet[2176]: E0414 00:16:11.529085 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 00:16:11.534240 kubelet[2176]: I0414 00:16:11.529160 2176 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 14 00:16:11.534240 kubelet[2176]: I0414 00:16:11.531553 2176 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 00:16:11.534240 kubelet[2176]: E0414 00:16:11.531622 2176 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 00:16:11.537599 kubelet[2176]: E0414 00:16:11.537297 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:16:11.543435 kubelet[2176]: E0414 00:16:11.543350 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms" Apr 14 00:16:11.609119 kubelet[2176]: I0414 00:16:11.608290 2176 policy_none.go:49] "None policy: Start" Apr 14 00:16:11.611177 kubelet[2176]: I0414 00:16:11.610165 2176 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 00:16:11.611177 kubelet[2176]: I0414 00:16:11.610308 2176 state_mem.go:35] "Initializing new in-memory state store" Apr 14 00:16:11.632693 kubelet[2176]: E0414 00:16:11.632633 2176 
kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 00:16:11.638513 kubelet[2176]: E0414 00:16:11.638289 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:16:11.738655 kubelet[2176]: E0414 00:16:11.738586 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:16:11.740355 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 14 00:16:11.798275 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 14 00:16:11.835691 kubelet[2176]: E0414 00:16:11.832921 2176 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 00:16:11.834033 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 14 00:16:11.843734 kubelet[2176]: E0414 00:16:11.843615 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:16:11.864139 kubelet[2176]: E0414 00:16:11.864077 2176 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 00:16:11.864387 kubelet[2176]: I0414 00:16:11.864333 2176 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 00:16:11.864465 kubelet[2176]: I0414 00:16:11.864382 2176 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 00:16:11.866301 kubelet[2176]: I0414 00:16:11.865806 2176 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 00:16:11.875851 kubelet[2176]: E0414 00:16:11.875120 2176 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 14 00:16:11.875851 kubelet[2176]: E0414 00:16:11.875192 2176 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 00:16:11.948178 kubelet[2176]: E0414 00:16:11.947686 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" Apr 14 00:16:11.969156 kubelet[2176]: I0414 00:16:11.968804 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:16:11.980617 kubelet[2176]: E0414 00:16:11.979865 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Apr 14 00:16:12.191132 kubelet[2176]: I0414 00:16:12.189185 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:16:12.192698 kubelet[2176]: E0414 00:16:12.192621 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Apr 14 00:16:12.282177 systemd[1]: Created slice kubepods-burstable-pod910620ab97bd565e57355b3584f4fd7d.slice - libcontainer container kubepods-burstable-pod910620ab97bd565e57355b3584f4fd7d.slice. 
Apr 14 00:16:12.293588 kubelet[2176]: I0414 00:16:12.291495 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/910620ab97bd565e57355b3584f4fd7d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"910620ab97bd565e57355b3584f4fd7d\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:16:12.293588 kubelet[2176]: I0414 00:16:12.291545 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/910620ab97bd565e57355b3584f4fd7d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"910620ab97bd565e57355b3584f4fd7d\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:16:12.293588 kubelet[2176]: I0414 00:16:12.292780 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/910620ab97bd565e57355b3584f4fd7d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"910620ab97bd565e57355b3584f4fd7d\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:16:12.311815 kubelet[2176]: E0414 00:16:12.304400 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:16:12.385053 systemd[1]: Created slice kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice - libcontainer container kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice. 
Apr 14 00:16:12.397125 kubelet[2176]: I0414 00:16:12.394707 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:16:12.397125 kubelet[2176]: I0414 00:16:12.394823 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:16:12.397125 kubelet[2176]: I0414 00:16:12.394860 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:16:12.397125 kubelet[2176]: I0414 00:16:12.394883 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:16:12.397125 kubelet[2176]: I0414 00:16:12.394902 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 14 00:16:12.397841 kubelet[2176]: I0414 00:16:12.394923 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 00:16:12.404587 kubelet[2176]: E0414 00:16:12.404499 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:16:12.552761 systemd[1]: Created slice kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice - libcontainer container kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice. Apr 14 00:16:12.578974 kubelet[2176]: E0414 00:16:12.578007 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:16:12.590366 kubelet[2176]: E0414 00:16:12.587583 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:16:12.592193 containerd[1477]: time="2026-04-14T00:16:12.588896477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}" Apr 14 00:16:12.598246 kubelet[2176]: I0414 00:16:12.597596 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:16:12.616158 kubelet[2176]: E0414 00:16:12.601307 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Apr 14 00:16:12.616158 kubelet[2176]: E0414 00:16:12.615039 2176 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:16:12.626887 containerd[1477]: time="2026-04-14T00:16:12.624803649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:910620ab97bd565e57355b3584f4fd7d,Namespace:kube-system,Attempt:0,}" Apr 14 00:16:12.695077 kubelet[2176]: E0414 00:16:12.692303 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 00:16:12.709254 kubelet[2176]: E0414 00:16:12.707260 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:16:12.711086 containerd[1477]: time="2026-04-14T00:16:12.710815707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}" Apr 14 00:16:12.810267 kubelet[2176]: E0414 00:16:12.806783 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="1.6s" Apr 14 00:16:12.811633 kubelet[2176]: E0414 00:16:12.811243 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 
00:16:12.886899 kubelet[2176]: E0414 00:16:12.884018 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 00:16:12.924078 kubelet[2176]: E0414 00:16:12.923689 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 00:16:12.946508 kubelet[2176]: E0414 00:16:12.946162 2176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a610fa8b987490 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:16:11.314132112 +0000 UTC m=+1.372016986,LastTimestamp:2026-04-14 00:16:11.314132112 +0000 UTC m=+1.372016986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 00:16:13.177345 kubelet[2176]: E0414 00:16:13.174951 2176 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: 
connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:16:13.436486 kubelet[2176]: I0414 00:16:13.434937 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:16:13.436486 kubelet[2176]: E0414 00:16:13.436108 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Apr 14 00:16:13.854972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029204398.mount: Deactivated successfully.
Apr 14 00:16:13.957195 containerd[1477]: time="2026-04-14T00:16:13.952385628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:16:13.997769 containerd[1477]: time="2026-04-14T00:16:13.997262411Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 00:16:14.039050 containerd[1477]: time="2026-04-14T00:16:14.037864722Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:16:14.090554 containerd[1477]: time="2026-04-14T00:16:14.088654916Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:16:14.108655 containerd[1477]: time="2026-04-14T00:16:14.107367434Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 14 00:16:14.140088 containerd[1477]: time="2026-04-14T00:16:14.139637843Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:16:14.152627 containerd[1477]: time="2026-04-14T00:16:14.152540373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 00:16:14.167549 containerd[1477]: time="2026-04-14T00:16:14.167012169Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.539761426s"
Apr 14 00:16:14.180015 containerd[1477]: time="2026-04-14T00:16:14.174935655Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 00:16:14.230367 containerd[1477]: time="2026-04-14T00:16:14.225263450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.5143101s"
Apr 14 00:16:14.230367 containerd[1477]: time="2026-04-14T00:16:14.228005083Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.635871294s"
Apr 14 00:16:14.409266 kubelet[2176]: E0414 00:16:14.408938 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="3.2s"
Apr 14 00:16:15.000042 kubelet[2176]: E0414 00:16:14.997329 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 00:16:15.077017 kubelet[2176]: I0414 00:16:15.074061 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:16:15.077864 kubelet[2176]: E0414 00:16:15.077533 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Apr 14 00:16:15.372538 kubelet[2176]: E0414 00:16:15.371825 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 00:16:15.693663 kubelet[2176]: E0414 00:16:15.692759 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 00:16:16.093713 kubelet[2176]: E0414 00:16:16.093539 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 00:16:16.313579 containerd[1477]: time="2026-04-14T00:16:16.312437279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:16:16.314930 containerd[1477]: time="2026-04-14T00:16:16.314778255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:16:16.315099 containerd[1477]: time="2026-04-14T00:16:16.315043983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:16:16.318084 containerd[1477]: time="2026-04-14T00:16:16.317392995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:16:16.323447 containerd[1477]: time="2026-04-14T00:16:16.321933532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:16:16.323447 containerd[1477]: time="2026-04-14T00:16:16.321988902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:16:16.326771 containerd[1477]: time="2026-04-14T00:16:16.326154813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:16:16.326771 containerd[1477]: time="2026-04-14T00:16:16.326270794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:16:16.335819 containerd[1477]: time="2026-04-14T00:16:16.332061006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 00:16:16.335819 containerd[1477]: time="2026-04-14T00:16:16.332991046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 00:16:16.335819 containerd[1477]: time="2026-04-14T00:16:16.333030708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:16:16.346294 containerd[1477]: time="2026-04-14T00:16:16.345925790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 00:16:17.103185 systemd[1]: Started cri-containerd-6f07b4989dfe4028bfbf1de91b85f5dd7a12669b8a496d3226fa1050756d4932.scope - libcontainer container 6f07b4989dfe4028bfbf1de91b85f5dd7a12669b8a496d3226fa1050756d4932.
Apr 14 00:16:17.120624 systemd[1]: Started cri-containerd-19dea8460db2dc4b40229134ba5177c003d62dff764dda702c66cf898c04f0cf.scope - libcontainer container 19dea8460db2dc4b40229134ba5177c003d62dff764dda702c66cf898c04f0cf.
Apr 14 00:16:17.277730 systemd[1]: Started cri-containerd-97e0025fbbab0ae997df1e5c53ed497e87744739af45d04c622a639e5206e0a7.scope - libcontainer container 97e0025fbbab0ae997df1e5c53ed497e87744739af45d04c622a639e5206e0a7.
Apr 14 00:16:17.443129 kubelet[2176]: E0414 00:16:17.441114 2176 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:16:17.624684 kubelet[2176]: E0414 00:16:17.624590 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="6.4s"
Apr 14 00:16:17.646941 containerd[1477]: time="2026-04-14T00:16:17.646611377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f07b4989dfe4028bfbf1de91b85f5dd7a12669b8a496d3226fa1050756d4932\""
Apr 14 00:16:17.648834 kubelet[2176]: E0414 00:16:17.648763 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:17.673875 containerd[1477]: time="2026-04-14T00:16:17.673314924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:910620ab97bd565e57355b3584f4fd7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"97e0025fbbab0ae997df1e5c53ed497e87744739af45d04c622a639e5206e0a7\""
Apr 14 00:16:17.684439 containerd[1477]: time="2026-04-14T00:16:17.682359394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"19dea8460db2dc4b40229134ba5177c003d62dff764dda702c66cf898c04f0cf\""
Apr 14 00:16:17.686054 kubelet[2176]: E0414 00:16:17.683166 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:17.686660 containerd[1477]: time="2026-04-14T00:16:17.685743629Z" level=info msg="CreateContainer within sandbox \"6f07b4989dfe4028bfbf1de91b85f5dd7a12669b8a496d3226fa1050756d4932\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 14 00:16:17.699338 kubelet[2176]: E0414 00:16:17.699054 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:17.710157 containerd[1477]: time="2026-04-14T00:16:17.709947677Z" level=info msg="CreateContainer within sandbox \"97e0025fbbab0ae997df1e5c53ed497e87744739af45d04c622a639e5206e0a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 14 00:16:17.719723 containerd[1477]: time="2026-04-14T00:16:17.718132394Z" level=info msg="CreateContainer within sandbox \"19dea8460db2dc4b40229134ba5177c003d62dff764dda702c66cf898c04f0cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 14 00:16:17.802787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1101291102.mount: Deactivated successfully.
Apr 14 00:16:17.878674 containerd[1477]: time="2026-04-14T00:16:17.878290952Z" level=info msg="CreateContainer within sandbox \"6f07b4989dfe4028bfbf1de91b85f5dd7a12669b8a496d3226fa1050756d4932\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2e6825335a46363138f2791dd61b7c34dd7bb29a97f16ad5c1322537d4e05e7c\""
Apr 14 00:16:17.882609 containerd[1477]: time="2026-04-14T00:16:17.880292715Z" level=info msg="StartContainer for \"2e6825335a46363138f2791dd61b7c34dd7bb29a97f16ad5c1322537d4e05e7c\""
Apr 14 00:16:17.910517 containerd[1477]: time="2026-04-14T00:16:17.909625493Z" level=info msg="CreateContainer within sandbox \"97e0025fbbab0ae997df1e5c53ed497e87744739af45d04c622a639e5206e0a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd1d5da57cdb0bb7bac0fdbb9c48e5e619426be79269fc9e4443c5abde057122\""
Apr 14 00:16:17.914266 containerd[1477]: time="2026-04-14T00:16:17.912019288Z" level=info msg="StartContainer for \"dd1d5da57cdb0bb7bac0fdbb9c48e5e619426be79269fc9e4443c5abde057122\""
Apr 14 00:16:17.926510 containerd[1477]: time="2026-04-14T00:16:17.925594589Z" level=info msg="CreateContainer within sandbox \"19dea8460db2dc4b40229134ba5177c003d62dff764dda702c66cf898c04f0cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5ce41ad39e249734da6aebc06a9ee3aab2ef1a8bbc28cf4d3d11146a88e25869\""
Apr 14 00:16:17.929956 containerd[1477]: time="2026-04-14T00:16:17.929900626Z" level=info msg="StartContainer for \"5ce41ad39e249734da6aebc06a9ee3aab2ef1a8bbc28cf4d3d11146a88e25869\""
Apr 14 00:16:18.030584 systemd[1]: Started cri-containerd-dd1d5da57cdb0bb7bac0fdbb9c48e5e619426be79269fc9e4443c5abde057122.scope - libcontainer container dd1d5da57cdb0bb7bac0fdbb9c48e5e619426be79269fc9e4443c5abde057122.
Apr 14 00:16:18.060091 systemd[1]: Started cri-containerd-2e6825335a46363138f2791dd61b7c34dd7bb29a97f16ad5c1322537d4e05e7c.scope - libcontainer container 2e6825335a46363138f2791dd61b7c34dd7bb29a97f16ad5c1322537d4e05e7c.
Apr 14 00:16:18.130113 systemd[1]: Started cri-containerd-5ce41ad39e249734da6aebc06a9ee3aab2ef1a8bbc28cf4d3d11146a88e25869.scope - libcontainer container 5ce41ad39e249734da6aebc06a9ee3aab2ef1a8bbc28cf4d3d11146a88e25869.
Apr 14 00:16:18.281315 kubelet[2176]: I0414 00:16:18.280746 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:16:18.281315 kubelet[2176]: E0414 00:16:18.281139 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Apr 14 00:16:18.344373 containerd[1477]: time="2026-04-14T00:16:18.338622900Z" level=info msg="StartContainer for \"2e6825335a46363138f2791dd61b7c34dd7bb29a97f16ad5c1322537d4e05e7c\" returns successfully"
Apr 14 00:16:18.361441 containerd[1477]: time="2026-04-14T00:16:18.349956044Z" level=info msg="StartContainer for \"dd1d5da57cdb0bb7bac0fdbb9c48e5e619426be79269fc9e4443c5abde057122\" returns successfully"
Apr 14 00:16:18.510640 containerd[1477]: time="2026-04-14T00:16:18.510035809Z" level=info msg="StartContainer for \"5ce41ad39e249734da6aebc06a9ee3aab2ef1a8bbc28cf4d3d11146a88e25869\" returns successfully"
Apr 14 00:16:18.691372 kubelet[2176]: E0414 00:16:18.691332 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:16:18.692211 kubelet[2176]: E0414 00:16:18.691555 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:18.729461 kubelet[2176]: E0414 00:16:18.729377 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:16:18.731073 kubelet[2176]: E0414 00:16:18.731040 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:18.735301 kubelet[2176]: E0414 00:16:18.735264 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:16:18.735683 kubelet[2176]: E0414 00:16:18.735669 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:19.732148 kubelet[2176]: E0414 00:16:19.731946 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:16:19.734319 kubelet[2176]: E0414 00:16:19.734112 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:19.737233 kubelet[2176]: E0414 00:16:19.736681 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:16:19.737233 kubelet[2176]: E0414 00:16:19.736865 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:20.685601 kubelet[2176]: E0414 00:16:20.684895 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:16:20.687290 kubelet[2176]: E0414 00:16:20.687156 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:21.882213 kubelet[2176]: E0414 00:16:21.882093 2176 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:16:22.629315 kubelet[2176]: E0414 00:16:22.629119 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:16:22.630052 kubelet[2176]: E0414 00:16:22.629884 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:24.545293 kubelet[2176]: E0414 00:16:24.545198 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:16:24.581856 kubelet[2176]: E0414 00:16:24.581704 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:24.702232 kubelet[2176]: I0414 00:16:24.702041 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:16:29.323314 kubelet[2176]: E0414 00:16:29.323171 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 00:16:30.500038 kubelet[2176]: E0414 00:16:30.499321 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 00:16:30.696391 kubelet[2176]: E0414 00:16:30.695872 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:16:30.696391 kubelet[2176]: E0414 00:16:30.696204 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:31.313611 kubelet[2176]: E0414 00:16:31.313514 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 00:16:31.579684 kubelet[2176]: E0414 00:16:31.579327 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 00:16:31.883354 kubelet[2176]: E0414 00:16:31.883274 2176 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:16:32.949173 kubelet[2176]: E0414 00:16:32.948663 2176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a610fa8b987490 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:16:11.314132112 +0000 UTC m=+1.372016986,LastTimestamp:2026-04-14 00:16:11.314132112 +0000 UTC m=+1.372016986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:16:34.030063 kubelet[2176]: E0414 00:16:34.029841 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 14 00:16:34.712109 kubelet[2176]: E0414 00:16:34.711375 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 14 00:16:35.820288 kubelet[2176]: E0414 00:16:35.819078 2176 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:16:41.729935 kubelet[2176]: I0414 00:16:41.729695 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:16:41.884786 kubelet[2176]: E0414 00:16:41.884565 2176 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:16:44.035096 kubelet[2176]: E0414 00:16:44.034812 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:16:44.038288 kubelet[2176]: E0414 00:16:44.038171 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:16:45.965971 kubelet[2176]: E0414 00:16:45.948759 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 00:16:49.413093 kubelet[2176]: E0414 00:16:49.413030 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 00:16:49.805841 kubelet[2176]: E0414 00:16:49.804955 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 00:16:51.032762 kubelet[2176]: E0414 00:16:51.031920 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 14 00:16:51.741214 kubelet[2176]: E0414 00:16:51.740107 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 14 00:16:51.885606 kubelet[2176]: E0414 00:16:51.885488 2176 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:16:52.963954 kubelet[2176]: E0414 00:16:52.962906 2176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a610fa8b987490 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:16:11.314132112 +0000 UTC m=+1.372016986,LastTimestamp:2026-04-14 00:16:11.314132112 +0000 UTC m=+1.372016986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:16:53.803244 kubelet[2176]: E0414 00:16:53.803157 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 00:16:58.787726 kubelet[2176]: I0414 00:16:58.787667 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:17:01.887698 kubelet[2176]: E0414 00:17:01.886965 2176 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:17:02.005639 kubelet[2176]: E0414 00:17:02.005266 2176 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:17:02.009050 kubelet[2176]: E0414 00:17:02.007391 2176 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:17:03.837019 kubelet[2176]: E0414 00:17:03.835270 2176 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 14 00:17:03.998588 kubelet[2176]: E0414 00:17:03.997980 2176 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a610fa8b987490 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:16:11.314132112 +0000 UTC m=+1.372016986,LastTimestamp:2026-04-14 00:16:11.314132112 +0000 UTC m=+1.372016986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:17:04.095800 kubelet[2176]: I0414 00:17:04.095520 2176 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 14 00:17:04.095800 kubelet[2176]: E0414 00:17:04.095608 2176 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 14 00:17:04.281121 kubelet[2176]: E0414 00:17:04.279198 2176 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a610fa8c9a6b9b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:16:11.331038107 +0000 UTC m=+1.388922980,LastTimestamp:2026-04-14 00:16:11.331038107 +0000 UTC m=+1.388922980,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:17:04.321604 kubelet[2176]: E0414 00:17:04.321320 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:04.375903 kubelet[2176]: E0414 00:17:04.375073 2176 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a610fa966460f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:16:11.495268594 +0000 UTC m=+1.553153474,LastTimestamp:2026-04-14 00:16:11.495268594 +0000 UTC m=+1.553153474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:17:04.422777 kubelet[2176]: E0414 00:17:04.422694 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:04.524967 kubelet[2176]: E0414 00:17:04.524836 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:04.634505 kubelet[2176]: E0414 00:17:04.625882 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:04.728844 kubelet[2176]: E0414 00:17:04.728675 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:04.830473 kubelet[2176]: E0414 00:17:04.830117 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:04.936609 kubelet[2176]: E0414 00:17:04.931122 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:05.032388 kubelet[2176]: E0414 00:17:05.032313 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:05.133882 kubelet[2176]: E0414 00:17:05.133775 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:05.238307 kubelet[2176]: E0414 00:17:05.237895 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:05.341463 kubelet[2176]: E0414 00:17:05.338899 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:05.447594 kubelet[2176]: E0414 00:17:05.447290 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:05.548724 kubelet[2176]: E0414 00:17:05.548552 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:05.704211 kubelet[2176]: E0414 00:17:05.703926 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:05.805747 kubelet[2176]: E0414 00:17:05.805034 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:05.906167 kubelet[2176]: E0414 00:17:05.906033 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:06.007214 kubelet[2176]: E0414 00:17:06.007060 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:06.108451 kubelet[2176]: E0414 00:17:06.108207 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:06.209760 kubelet[2176]: E0414 00:17:06.209564 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:06.313230 kubelet[2176]: E0414 00:17:06.312808 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:06.415560 kubelet[2176]: E0414 00:17:06.414577 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:06.518786 kubelet[2176]: E0414 00:17:06.518688 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:06.621154 kubelet[2176]: E0414 00:17:06.620783 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:06.723325 kubelet[2176]: E0414 00:17:06.721764 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:06.823363 kubelet[2176]: E0414 00:17:06.823166 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:06.930898 kubelet[2176]: E0414 00:17:06.930825 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:07.032482 kubelet[2176]: E0414 00:17:07.031061 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:07.132827 kubelet[2176]: E0414 00:17:07.132735 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:07.233496 kubelet[2176]: E0414 00:17:07.233339 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:07.335190 kubelet[2176]: E0414 00:17:07.334914 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:07.436780 kubelet[2176]: E0414 00:17:07.436607 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:07.537235 kubelet[2176]: E0414 00:17:07.537102 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:07.637790 kubelet[2176]: E0414 00:17:07.637692 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:07.738959 kubelet[2176]: E0414 00:17:07.738828 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:07.839686 kubelet[2176]: E0414 00:17:07.839576 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:07.941756 kubelet[2176]: E0414 00:17:07.941552 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:08.042739 kubelet[2176]: E0414 00:17:08.042084 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:17:08.143944 kubelet[2176]: E0414 00:17:08.143824 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node
\"localhost\" not found" Apr 14 00:17:08.244932 kubelet[2176]: E0414 00:17:08.244685 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:08.346308 kubelet[2176]: E0414 00:17:08.345510 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:08.446685 kubelet[2176]: E0414 00:17:08.446166 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:08.548511 kubelet[2176]: E0414 00:17:08.548199 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:08.698790 kubelet[2176]: E0414 00:17:08.698674 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:08.807498 kubelet[2176]: E0414 00:17:08.801765 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:08.928638 kubelet[2176]: E0414 00:17:08.922087 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:09.028332 kubelet[2176]: E0414 00:17:09.023794 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:09.125000 kubelet[2176]: E0414 00:17:09.124883 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:09.226794 kubelet[2176]: E0414 00:17:09.226698 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:09.334797 kubelet[2176]: E0414 00:17:09.330037 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:09.440607 kubelet[2176]: E0414 00:17:09.439929 2176 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:09.543727 kubelet[2176]: E0414 00:17:09.542522 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:09.646064 kubelet[2176]: E0414 00:17:09.645869 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:09.759697 kubelet[2176]: E0414 00:17:09.747825 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:09.852457 kubelet[2176]: E0414 00:17:09.852177 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:09.997336 kubelet[2176]: E0414 00:17:09.997285 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:10.098834 kubelet[2176]: E0414 00:17:10.098501 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:10.200323 kubelet[2176]: E0414 00:17:10.199833 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:10.308584 kubelet[2176]: E0414 00:17:10.300611 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:10.412675 kubelet[2176]: E0414 00:17:10.412568 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:10.514143 kubelet[2176]: E0414 00:17:10.514020 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:10.616017 kubelet[2176]: E0414 00:17:10.614877 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" 
not found" Apr 14 00:17:10.716220 kubelet[2176]: E0414 00:17:10.715602 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:10.817177 kubelet[2176]: E0414 00:17:10.817026 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:10.920581 kubelet[2176]: E0414 00:17:10.918785 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:11.022170 kubelet[2176]: E0414 00:17:11.021920 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:11.124577 kubelet[2176]: E0414 00:17:11.122759 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:11.224237 kubelet[2176]: E0414 00:17:11.224040 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:11.325756 kubelet[2176]: E0414 00:17:11.324693 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:11.428775 kubelet[2176]: E0414 00:17:11.428534 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:11.530862 kubelet[2176]: E0414 00:17:11.530362 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:11.634745 kubelet[2176]: E0414 00:17:11.634588 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:11.739044 kubelet[2176]: E0414 00:17:11.738790 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:11.841226 kubelet[2176]: E0414 00:17:11.840779 2176 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:11.897441 kubelet[2176]: E0414 00:17:11.889803 2176 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 00:17:11.942235 kubelet[2176]: E0414 00:17:11.941222 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:12.042780 kubelet[2176]: E0414 00:17:12.042648 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:12.145266 kubelet[2176]: E0414 00:17:12.145146 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:12.248065 kubelet[2176]: E0414 00:17:12.246866 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:12.348282 kubelet[2176]: E0414 00:17:12.348159 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:12.483276 kubelet[2176]: E0414 00:17:12.483109 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:12.584731 kubelet[2176]: E0414 00:17:12.584313 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:12.685024 kubelet[2176]: E0414 00:17:12.684951 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:12.787235 kubelet[2176]: E0414 00:17:12.785929 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:12.887307 kubelet[2176]: E0414 00:17:12.887230 2176 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" Apr 14 00:17:12.995705 kubelet[2176]: E0414 00:17:12.992643 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:13.094114 kubelet[2176]: E0414 00:17:13.093906 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:13.214587 kubelet[2176]: E0414 00:17:13.210455 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:13.313915 kubelet[2176]: E0414 00:17:13.313455 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:13.433825 kubelet[2176]: E0414 00:17:13.433714 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:13.535579 kubelet[2176]: E0414 00:17:13.534323 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:13.636031 kubelet[2176]: E0414 00:17:13.635930 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:13.738288 kubelet[2176]: E0414 00:17:13.737289 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:13.840919 kubelet[2176]: E0414 00:17:13.838033 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:13.944069 kubelet[2176]: E0414 00:17:13.941884 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:14.044987 kubelet[2176]: E0414 00:17:14.042768 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:14.146313 kubelet[2176]: E0414 
00:17:14.145656 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:14.248434 kubelet[2176]: E0414 00:17:14.246967 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:14.387034 kubelet[2176]: E0414 00:17:14.386742 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:14.487725 kubelet[2176]: E0414 00:17:14.487269 2176 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 14 00:17:14.809398 kubelet[2176]: E0414 00:17:14.808909 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:14.910687 kubelet[2176]: E0414 00:17:14.910396 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:15.011152 kubelet[2176]: E0414 00:17:15.010967 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:15.140856 kubelet[2176]: I0414 00:17:15.140004 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 00:17:15.387322 kubelet[2176]: I0414 00:17:15.386919 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:17:15.544057 kubelet[2176]: I0414 00:17:15.543321 2176 apiserver.go:52] "Watching apiserver" Apr 14 00:17:15.587553 kubelet[2176]: I0414 00:17:15.580393 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 00:17:15.605139 kubelet[2176]: E0414 00:17:15.604952 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:15.608469 kubelet[2176]: E0414 00:17:15.606360 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:15.641763 kubelet[2176]: E0414 00:17:15.641574 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:15.647837 kubelet[2176]: I0414 00:17:15.647739 2176 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 00:17:22.698123 kubelet[2176]: I0414 00:17:22.697843 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.697820819 podStartE2EDuration="7.697820819s" podCreationTimestamp="2026-04-14 00:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:17:22.347915523 +0000 UTC m=+72.405800394" watchObservedRunningTime="2026-04-14 00:17:22.697820819 +0000 UTC m=+72.755705723" Apr 14 00:17:23.320157 kubelet[2176]: I0414 00:17:23.319934 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.319902436 podStartE2EDuration="8.319902436s" podCreationTimestamp="2026-04-14 00:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:17:22.69871102 +0000 UTC m=+72.756595889" watchObservedRunningTime="2026-04-14 00:17:23.319902436 +0000 UTC m=+73.377787315" Apr 14 00:17:32.314922 systemd[1]: cri-containerd-5ce41ad39e249734da6aebc06a9ee3aab2ef1a8bbc28cf4d3d11146a88e25869.scope: Deactivated successfully. 
Apr 14 00:17:32.316860 systemd[1]: cri-containerd-5ce41ad39e249734da6aebc06a9ee3aab2ef1a8bbc28cf4d3d11146a88e25869.scope: Consumed 4.878s CPU time. Apr 14 00:17:32.467717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ce41ad39e249734da6aebc06a9ee3aab2ef1a8bbc28cf4d3d11146a88e25869-rootfs.mount: Deactivated successfully. Apr 14 00:17:32.476753 containerd[1477]: time="2026-04-14T00:17:32.476639204Z" level=info msg="shim disconnected" id=5ce41ad39e249734da6aebc06a9ee3aab2ef1a8bbc28cf4d3d11146a88e25869 namespace=k8s.io Apr 14 00:17:32.476753 containerd[1477]: time="2026-04-14T00:17:32.476742804Z" level=warning msg="cleaning up after shim disconnected" id=5ce41ad39e249734da6aebc06a9ee3aab2ef1a8bbc28cf4d3d11146a88e25869 namespace=k8s.io Apr 14 00:17:32.476753 containerd[1477]: time="2026-04-14T00:17:32.476754779Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:17:33.012902 kubelet[2176]: I0414 00:17:33.012794 2176 scope.go:117] "RemoveContainer" containerID="5ce41ad39e249734da6aebc06a9ee3aab2ef1a8bbc28cf4d3d11146a88e25869" Apr 14 00:17:33.015026 kubelet[2176]: E0414 00:17:33.014913 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:33.021723 containerd[1477]: time="2026-04-14T00:17:33.021530343Z" level=info msg="CreateContainer within sandbox \"19dea8460db2dc4b40229134ba5177c003d62dff764dda702c66cf898c04f0cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 14 00:17:33.103398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1402981933.mount: Deactivated successfully. 
Apr 14 00:17:33.123177 containerd[1477]: time="2026-04-14T00:17:33.121357765Z" level=info msg="CreateContainer within sandbox \"19dea8460db2dc4b40229134ba5177c003d62dff764dda702c66cf898c04f0cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2368b0d41d4b59f52b60ef82166cf0e3a792d1054e23cd031a1853a7b5c1e028\"" Apr 14 00:17:33.130501 containerd[1477]: time="2026-04-14T00:17:33.128363801Z" level=info msg="StartContainer for \"2368b0d41d4b59f52b60ef82166cf0e3a792d1054e23cd031a1853a7b5c1e028\"" Apr 14 00:17:33.367768 systemd[1]: Started cri-containerd-2368b0d41d4b59f52b60ef82166cf0e3a792d1054e23cd031a1853a7b5c1e028.scope - libcontainer container 2368b0d41d4b59f52b60ef82166cf0e3a792d1054e23cd031a1853a7b5c1e028. Apr 14 00:17:33.438202 kubelet[2176]: I0414 00:17:33.438019 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=18.438000205 podStartE2EDuration="18.438000205s" podCreationTimestamp="2026-04-14 00:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:17:23.329354243 +0000 UTC m=+73.387239112" watchObservedRunningTime="2026-04-14 00:17:33.438000205 +0000 UTC m=+83.495885093" Apr 14 00:17:33.592109 containerd[1477]: time="2026-04-14T00:17:33.591025138Z" level=info msg="StartContainer for \"2368b0d41d4b59f52b60ef82166cf0e3a792d1054e23cd031a1853a7b5c1e028\" returns successfully" Apr 14 00:17:34.042095 kubelet[2176]: E0414 00:17:34.038875 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:40.609898 kubelet[2176]: E0414 00:17:40.604091 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 
14 00:17:42.617725 systemd[1]: Reloading requested from client PID 2541 ('systemctl') (unit session-5.scope)... Apr 14 00:17:42.617741 systemd[1]: Reloading... Apr 14 00:17:42.862479 zram_generator::config[2580]: No configuration found. Apr 14 00:17:42.867018 kubelet[2176]: E0414 00:17:42.866845 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:43.225185 kubelet[2176]: E0414 00:17:43.224960 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:43.228286 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 00:17:43.474639 systemd[1]: Reloading finished in 856 ms. Apr 14 00:17:43.568846 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:17:43.596355 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 00:17:43.597198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:17:43.601865 systemd[1]: kubelet.service: Consumed 11.746s CPU time, 140.0M memory peak, 0B memory swap peak. Apr 14 00:17:43.615818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:17:44.017143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:17:44.056935 (kubelet)[2624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 00:17:44.365598 kubelet[2624]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 00:17:44.365598 kubelet[2624]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 00:17:44.365598 kubelet[2624]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 00:17:44.366527 kubelet[2624]: I0414 00:17:44.365710 2624 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 00:17:44.381882 kubelet[2624]: I0414 00:17:44.381780 2624 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 00:17:44.381882 kubelet[2624]: I0414 00:17:44.381863 2624 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 00:17:44.384369 kubelet[2624]: I0414 00:17:44.383600 2624 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 00:17:44.388213 kubelet[2624]: I0414 00:17:44.387900 2624 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 14 00:17:44.397831 kubelet[2624]: I0414 00:17:44.396270 2624 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 00:17:44.419950 kubelet[2624]: E0414 00:17:44.419830 2624 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 00:17:44.419950 kubelet[2624]: I0414 00:17:44.419929 2624 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Apr 14 00:17:44.520744 kubelet[2624]: I0414 00:17:44.520594 2624 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 14 00:17:44.530206 kubelet[2624]: I0414 00:17:44.527971 2624 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 00:17:44.538692 kubelet[2624]: I0414 00:17:44.534046 2624 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Apr 14 00:17:44.538692 kubelet[2624]: I0414 00:17:44.537631 2624 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 00:17:44.538692 kubelet[2624]: I0414 00:17:44.537813 2624 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 00:17:44.538692 kubelet[2624]: I0414 00:17:44.537922 2624 state_mem.go:36] "Initialized new in-memory state store" Apr 14 00:17:44.542908 kubelet[2624]: I0414 00:17:44.541689 2624 kubelet.go:480] "Attempting to sync node with API server" Apr 14 00:17:44.543141 kubelet[2624]: I0414 00:17:44.542945 2624 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 00:17:44.543141 kubelet[2624]: I0414 00:17:44.543020 2624 kubelet.go:386] "Adding apiserver pod source" Apr 14 00:17:44.545625 kubelet[2624]: I0414 00:17:44.545535 2624 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 00:17:44.555534 kubelet[2624]: I0414 00:17:44.555306 2624 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 00:17:44.558253 kubelet[2624]: I0414 00:17:44.558004 2624 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 00:17:44.578555 kubelet[2624]: I0414 00:17:44.576545 2624 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 00:17:44.578555 kubelet[2624]: I0414 00:17:44.576656 2624 server.go:1289] "Started kubelet" Apr 14 00:17:44.581473 kubelet[2624]: I0414 00:17:44.581251 2624 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 00:17:44.582238 kubelet[2624]: I0414 00:17:44.582068 2624 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 00:17:44.582987 kubelet[2624]: I0414 00:17:44.582793 2624 server.go:180] 
"Starting to listen" address="0.0.0.0" port=10250 Apr 14 00:17:44.583179 kubelet[2624]: I0414 00:17:44.583026 2624 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 00:17:44.595054 kubelet[2624]: I0414 00:17:44.594944 2624 server.go:317] "Adding debug handlers to kubelet server" Apr 14 00:17:44.601611 kubelet[2624]: I0414 00:17:44.599998 2624 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 00:17:44.614676 kubelet[2624]: I0414 00:17:44.612249 2624 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 00:17:44.614676 kubelet[2624]: E0414 00:17:44.614612 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:44.623133 kubelet[2624]: I0414 00:17:44.622247 2624 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 00:17:44.626489 kubelet[2624]: I0414 00:17:44.624350 2624 reconciler.go:26] "Reconciler: start to sync state" Apr 14 00:17:44.642165 kubelet[2624]: I0414 00:17:44.641964 2624 factory.go:223] Registration of the containerd container factory successfully Apr 14 00:17:44.642683 kubelet[2624]: I0414 00:17:44.642250 2624 factory.go:223] Registration of the systemd container factory successfully Apr 14 00:17:44.644238 kubelet[2624]: I0414 00:17:44.643755 2624 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 00:17:44.682992 kubelet[2624]: E0414 00:17:44.682844 2624 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 00:17:44.715899 kubelet[2624]: E0414 00:17:44.715832 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:44.744040 kubelet[2624]: I0414 00:17:44.741017 2624 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 00:17:44.783790 kubelet[2624]: I0414 00:17:44.775267 2624 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 14 00:17:44.783790 kubelet[2624]: I0414 00:17:44.775671 2624 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 00:17:44.783790 kubelet[2624]: I0414 00:17:44.775968 2624 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 14 00:17:44.783790 kubelet[2624]: I0414 00:17:44.775982 2624 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 00:17:44.783790 kubelet[2624]: E0414 00:17:44.776043 2624 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 00:17:44.817749 kubelet[2624]: E0414 00:17:44.817695 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:44.910323 kubelet[2624]: E0414 00:17:44.909522 2624 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 00:17:44.919945 kubelet[2624]: E0414 00:17:44.919561 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:45.022724 kubelet[2624]: E0414 00:17:45.022339 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:45.115795 kubelet[2624]: E0414 00:17:45.114129 2624 kubelet.go:2460] 
"Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 00:17:45.124853 kubelet[2624]: E0414 00:17:45.124670 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:45.233973 kubelet[2624]: E0414 00:17:45.231027 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:17:45.328065 kubelet[2624]: I0414 00:17:45.327836 2624 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 00:17:45.328065 kubelet[2624]: I0414 00:17:45.327917 2624 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 00:17:45.328065 kubelet[2624]: I0414 00:17:45.327957 2624 state_mem.go:36] "Initialized new in-memory state store" Apr 14 00:17:45.329249 kubelet[2624]: I0414 00:17:45.329128 2624 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 14 00:17:45.329249 kubelet[2624]: I0414 00:17:45.329142 2624 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 14 00:17:45.329249 kubelet[2624]: I0414 00:17:45.329172 2624 policy_none.go:49] "None policy: Start" Apr 14 00:17:45.329249 kubelet[2624]: I0414 00:17:45.329193 2624 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 00:17:45.329249 kubelet[2624]: I0414 00:17:45.329206 2624 state_mem.go:35] "Initializing new in-memory state store" Apr 14 00:17:45.334803 kubelet[2624]: I0414 00:17:45.329742 2624 state_mem.go:75] "Updated machine memory state" Apr 14 00:17:45.432541 kubelet[2624]: E0414 00:17:45.430997 2624 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 00:17:45.444481 kubelet[2624]: I0414 00:17:45.444330 2624 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 00:17:45.521611 kubelet[2624]: E0414 00:17:45.521293 2624 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not 
have completed yet" Apr 14 00:17:45.523666 kubelet[2624]: I0414 00:17:45.444540 2624 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 00:17:45.525928 kubelet[2624]: I0414 00:17:45.525669 2624 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 00:17:45.548634 kubelet[2624]: I0414 00:17:45.547871 2624 apiserver.go:52] "Watching apiserver" Apr 14 00:17:45.555830 kubelet[2624]: E0414 00:17:45.553334 2624 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 14 00:17:45.700173 kubelet[2624]: I0414 00:17:45.699950 2624 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:17:46.027046 kubelet[2624]: I0414 00:17:46.026941 2624 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 14 00:17:46.028037 kubelet[2624]: I0414 00:17:46.027815 2624 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 00:17:46.332573 kubelet[2624]: I0414 00:17:46.330199 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:17:46.332573 kubelet[2624]: I0414 00:17:46.331388 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 00:17:46.337768 kubelet[2624]: I0414 00:17:46.337378 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 00:17:46.338370 kubelet[2624]: I0414 00:17:46.338147 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:17:46.338370 kubelet[2624]: 
I0414 00:17:46.338247 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:17:46.338370 kubelet[2624]: I0414 00:17:46.338330 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:17:46.338370 kubelet[2624]: I0414 00:17:46.338350 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:17:46.339225 kubelet[2624]: I0414 00:17:46.338979 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:17:46.424322 kubelet[2624]: I0414 00:17:46.424020 2624 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 00:17:46.441589 kubelet[2624]: I0414 00:17:46.441473 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/910620ab97bd565e57355b3584f4fd7d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"910620ab97bd565e57355b3584f4fd7d\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:17:46.442576 kubelet[2624]: I0414 00:17:46.442147 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 00:17:46.442576 kubelet[2624]: I0414 00:17:46.442254 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/910620ab97bd565e57355b3584f4fd7d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"910620ab97bd565e57355b3584f4fd7d\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:17:46.442691 kubelet[2624]: I0414 00:17:46.442663 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/910620ab97bd565e57355b3584f4fd7d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"910620ab97bd565e57355b3584f4fd7d\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:17:46.614074 kubelet[2624]: E0414 00:17:46.613793 2624 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:17:46.617324 kubelet[2624]: E0414 00:17:46.615003 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:46.622558 kubelet[2624]: E0414 00:17:46.619033 2624 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Apr 14 00:17:46.622558 kubelet[2624]: E0414 00:17:46.619377 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:46.622558 kubelet[2624]: E0414 00:17:46.620684 2624 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 14 00:17:46.622558 kubelet[2624]: E0414 00:17:46.621677 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:47.017074 kubelet[2624]: E0414 00:17:47.015865 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:47.019034 kubelet[2624]: E0414 00:17:47.018969 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:47.023999 kubelet[2624]: E0414 00:17:47.023698 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:48.034027 kubelet[2624]: E0414 00:17:48.033899 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:48.037559 kubelet[2624]: E0414 00:17:48.036222 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:49.127658 kubelet[2624]: E0414 00:17:49.126899 2624 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:50.123454 kubelet[2624]: E0414 00:17:50.123000 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:54.881913 kubelet[2624]: E0414 00:17:54.881678 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:55.216892 kubelet[2624]: E0414 00:17:55.216667 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:55.597783 sudo[1617]: pam_unix(sudo:session): session closed for user root Apr 14 00:17:55.601939 sshd[1614]: pam_unix(sshd:session): session closed for user core Apr 14 00:17:55.614702 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Apr 14 00:17:55.615721 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:46872.service: Deactivated successfully. Apr 14 00:17:55.620005 systemd[1]: session-5.scope: Deactivated successfully. Apr 14 00:17:55.621202 systemd[1]: session-5.scope: Consumed 7.268s CPU time, 163.2M memory peak, 0B memory swap peak. Apr 14 00:17:55.630119 systemd-logind[1460]: Removed session 5. 
Apr 14 00:17:55.911160 kubelet[2624]: E0414 00:17:55.911079 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:17:56.229448 kubelet[2624]: E0414 00:17:56.227963 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:30.235272 kubelet[2624]: I0414 00:18:30.222531 2624 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 14 00:18:30.235272 kubelet[2624]: I0414 00:18:30.227667 2624 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 14 00:18:30.237391 containerd[1477]: time="2026-04-14T00:18:30.226286740Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 14 00:18:33.710500 kubelet[2624]: I0414 00:18:33.710253 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c381b8fa-a6cb-47f1-8cfa-1f8638abba53-lib-modules\") pod \"kube-proxy-vbmvw\" (UID: \"c381b8fa-a6cb-47f1-8cfa-1f8638abba53\") " pod="kube-system/kube-proxy-vbmvw" Apr 14 00:18:33.710500 kubelet[2624]: I0414 00:18:33.710339 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c381b8fa-a6cb-47f1-8cfa-1f8638abba53-kube-proxy\") pod \"kube-proxy-vbmvw\" (UID: \"c381b8fa-a6cb-47f1-8cfa-1f8638abba53\") " pod="kube-system/kube-proxy-vbmvw" Apr 14 00:18:33.710500 kubelet[2624]: I0414 00:18:33.710363 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/c381b8fa-a6cb-47f1-8cfa-1f8638abba53-xtables-lock\") pod \"kube-proxy-vbmvw\" (UID: \"c381b8fa-a6cb-47f1-8cfa-1f8638abba53\") " pod="kube-system/kube-proxy-vbmvw" Apr 14 00:18:33.710500 kubelet[2624]: I0414 00:18:33.710386 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf5k4\" (UniqueName: \"kubernetes.io/projected/c381b8fa-a6cb-47f1-8cfa-1f8638abba53-kube-api-access-zf5k4\") pod \"kube-proxy-vbmvw\" (UID: \"c381b8fa-a6cb-47f1-8cfa-1f8638abba53\") " pod="kube-system/kube-proxy-vbmvw" Apr 14 00:18:33.715361 systemd[1]: Created slice kubepods-besteffort-podc381b8fa_a6cb_47f1_8cfa_1f8638abba53.slice - libcontainer container kubepods-besteffort-podc381b8fa_a6cb_47f1_8cfa_1f8638abba53.slice. Apr 14 00:18:34.265513 kubelet[2624]: E0414 00:18:34.265267 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 14 00:18:34.499532 systemd[1]: Created slice kubepods-burstable-pod35c88a84_3dbd_491d_8e69_6510b0f78010.slice - libcontainer container kubepods-burstable-pod35c88a84_3dbd_491d_8e69_6510b0f78010.slice. 
Apr 14 00:18:34.628654 kubelet[2624]: I0414 00:18:34.628364 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/35c88a84-3dbd-491d-8e69-6510b0f78010-cni\") pod \"kube-flannel-ds-dxl9z\" (UID: \"35c88a84-3dbd-491d-8e69-6510b0f78010\") " pod="kube-flannel/kube-flannel-ds-dxl9z" Apr 14 00:18:34.630221 kubelet[2624]: I0414 00:18:34.630145 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/35c88a84-3dbd-491d-8e69-6510b0f78010-flannel-cfg\") pod \"kube-flannel-ds-dxl9z\" (UID: \"35c88a84-3dbd-491d-8e69-6510b0f78010\") " pod="kube-flannel/kube-flannel-ds-dxl9z" Apr 14 00:18:34.631241 kubelet[2624]: I0414 00:18:34.630777 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/35c88a84-3dbd-491d-8e69-6510b0f78010-run\") pod \"kube-flannel-ds-dxl9z\" (UID: \"35c88a84-3dbd-491d-8e69-6510b0f78010\") " pod="kube-flannel/kube-flannel-ds-dxl9z" Apr 14 00:18:34.632371 kubelet[2624]: I0414 00:18:34.632301 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdgqx\" (UniqueName: \"kubernetes.io/projected/35c88a84-3dbd-491d-8e69-6510b0f78010-kube-api-access-sdgqx\") pod \"kube-flannel-ds-dxl9z\" (UID: \"35c88a84-3dbd-491d-8e69-6510b0f78010\") " pod="kube-flannel/kube-flannel-ds-dxl9z" Apr 14 00:18:34.632681 kubelet[2624]: I0414 00:18:34.632626 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/35c88a84-3dbd-491d-8e69-6510b0f78010-cni-plugin\") pod \"kube-flannel-ds-dxl9z\" (UID: \"35c88a84-3dbd-491d-8e69-6510b0f78010\") " pod="kube-flannel/kube-flannel-ds-dxl9z" Apr 14 00:18:34.633275 kubelet[2624]: I0414 00:18:34.633203 2624 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35c88a84-3dbd-491d-8e69-6510b0f78010-xtables-lock\") pod \"kube-flannel-ds-dxl9z\" (UID: \"35c88a84-3dbd-491d-8e69-6510b0f78010\") " pod="kube-flannel/kube-flannel-ds-dxl9z" Apr 14 00:18:35.629679 kubelet[2624]: E0414 00:18:35.619134 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:35.630924 containerd[1477]: time="2026-04-14T00:18:35.624169428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vbmvw,Uid:c381b8fa-a6cb-47f1-8cfa-1f8638abba53,Namespace:kube-system,Attempt:0,}" Apr 14 00:18:35.713507 kubelet[2624]: E0414 00:18:35.711982 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:35.716331 containerd[1477]: time="2026-04-14T00:18:35.716092517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dxl9z,Uid:35c88a84-3dbd-491d-8e69-6510b0f78010,Namespace:kube-flannel,Attempt:0,}" Apr 14 00:18:35.993312 containerd[1477]: time="2026-04-14T00:18:35.989583827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:18:35.993312 containerd[1477]: time="2026-04-14T00:18:35.989724463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:18:35.993312 containerd[1477]: time="2026-04-14T00:18:35.989740184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:18:36.000501 containerd[1477]: time="2026-04-14T00:18:35.996959154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:18:36.082896 systemd[1]: Started cri-containerd-d434c123c5cf9403cc0a204df5b0286fe562b87b2c38e8fae4ddc759767caf85.scope - libcontainer container d434c123c5cf9403cc0a204df5b0286fe562b87b2c38e8fae4ddc759767caf85. Apr 14 00:18:36.217522 containerd[1477]: time="2026-04-14T00:18:36.215116312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:18:36.217522 containerd[1477]: time="2026-04-14T00:18:36.215208960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:18:36.217522 containerd[1477]: time="2026-04-14T00:18:36.215243326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:18:36.217522 containerd[1477]: time="2026-04-14T00:18:36.215347529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:18:36.701077 systemd[1]: Started cri-containerd-c4cf1ac99a1ea68cb821a4b20ea56657d5907175d7a8b088bae8d2b10a1143c2.scope - libcontainer container c4cf1ac99a1ea68cb821a4b20ea56657d5907175d7a8b088bae8d2b10a1143c2. 
Apr 14 00:18:37.097049 containerd[1477]: time="2026-04-14T00:18:37.096682285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vbmvw,Uid:c381b8fa-a6cb-47f1-8cfa-1f8638abba53,Namespace:kube-system,Attempt:0,} returns sandbox id \"d434c123c5cf9403cc0a204df5b0286fe562b87b2c38e8fae4ddc759767caf85\"" Apr 14 00:18:37.108865 kubelet[2624]: E0414 00:18:37.107229 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:37.165088 containerd[1477]: time="2026-04-14T00:18:37.164917542Z" level=info msg="CreateContainer within sandbox \"d434c123c5cf9403cc0a204df5b0286fe562b87b2c38e8fae4ddc759767caf85\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 14 00:18:37.242950 containerd[1477]: time="2026-04-14T00:18:37.242846157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dxl9z,Uid:35c88a84-3dbd-491d-8e69-6510b0f78010,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c4cf1ac99a1ea68cb821a4b20ea56657d5907175d7a8b088bae8d2b10a1143c2\"" Apr 14 00:18:37.312759 kubelet[2624]: E0414 00:18:37.312643 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:37.350377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429184618.mount: Deactivated successfully. Apr 14 00:18:37.361011 containerd[1477]: time="2026-04-14T00:18:37.360959563Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Apr 14 00:18:37.402477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241489580.mount: Deactivated successfully. 
Apr 14 00:18:37.419021 containerd[1477]: time="2026-04-14T00:18:37.416286897Z" level=info msg="CreateContainer within sandbox \"d434c123c5cf9403cc0a204df5b0286fe562b87b2c38e8fae4ddc759767caf85\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef48d9a7b3c3b53998531c0acb91cfb7625b1b253923943a850616b107b076a3\"" Apr 14 00:18:37.443343 containerd[1477]: time="2026-04-14T00:18:37.442189257Z" level=info msg="StartContainer for \"ef48d9a7b3c3b53998531c0acb91cfb7625b1b253923943a850616b107b076a3\"" Apr 14 00:18:37.693994 systemd[1]: Started cri-containerd-ef48d9a7b3c3b53998531c0acb91cfb7625b1b253923943a850616b107b076a3.scope - libcontainer container ef48d9a7b3c3b53998531c0acb91cfb7625b1b253923943a850616b107b076a3. Apr 14 00:18:38.029687 containerd[1477]: time="2026-04-14T00:18:38.029277705Z" level=info msg="StartContainer for \"ef48d9a7b3c3b53998531c0acb91cfb7625b1b253923943a850616b107b076a3\" returns successfully" Apr 14 00:18:38.233852 kubelet[2624]: E0414 00:18:38.230714 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:40.313015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1127777254.mount: Deactivated successfully. 
Apr 14 00:18:40.556268 containerd[1477]: time="2026-04-14T00:18:40.555818836Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:18:40.564964 containerd[1477]: time="2026-04-14T00:18:40.564235507Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Apr 14 00:18:40.568486 containerd[1477]: time="2026-04-14T00:18:40.567944794Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:18:40.582277 containerd[1477]: time="2026-04-14T00:18:40.582040677Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:18:40.587565 containerd[1477]: time="2026-04-14T00:18:40.586293344Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 3.216008021s" Apr 14 00:18:40.587565 containerd[1477]: time="2026-04-14T00:18:40.586357162Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 14 00:18:40.693598 containerd[1477]: time="2026-04-14T00:18:40.693534092Z" level=info msg="CreateContainer within sandbox \"c4cf1ac99a1ea68cb821a4b20ea56657d5907175d7a8b088bae8d2b10a1143c2\" for container 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Apr 14 00:18:40.777873 containerd[1477]: time="2026-04-14T00:18:40.777747237Z" level=info msg="CreateContainer within sandbox \"c4cf1ac99a1ea68cb821a4b20ea56657d5907175d7a8b088bae8d2b10a1143c2\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"1e6fe5694ed9aab5ca826eff0e5e7439d9bd224998c685013d945e909cd81acf\"" Apr 14 00:18:40.792462 containerd[1477]: time="2026-04-14T00:18:40.789884208Z" level=info msg="StartContainer for \"1e6fe5694ed9aab5ca826eff0e5e7439d9bd224998c685013d945e909cd81acf\"" Apr 14 00:18:40.943393 systemd[1]: run-containerd-runc-k8s.io-1e6fe5694ed9aab5ca826eff0e5e7439d9bd224998c685013d945e909cd81acf-runc.uGl83O.mount: Deactivated successfully. Apr 14 00:18:40.979702 systemd[1]: Started cri-containerd-1e6fe5694ed9aab5ca826eff0e5e7439d9bd224998c685013d945e909cd81acf.scope - libcontainer container 1e6fe5694ed9aab5ca826eff0e5e7439d9bd224998c685013d945e909cd81acf. Apr 14 00:18:41.170518 systemd[1]: cri-containerd-1e6fe5694ed9aab5ca826eff0e5e7439d9bd224998c685013d945e909cd81acf.scope: Deactivated successfully. 
Apr 14 00:18:41.185132 containerd[1477]: time="2026-04-14T00:18:41.185026830Z" level=info msg="StartContainer for \"1e6fe5694ed9aab5ca826eff0e5e7439d9bd224998c685013d945e909cd81acf\" returns successfully" Apr 14 00:18:41.434156 kubelet[2624]: E0414 00:18:41.434037 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:41.439726 containerd[1477]: time="2026-04-14T00:18:41.439595723Z" level=info msg="shim disconnected" id=1e6fe5694ed9aab5ca826eff0e5e7439d9bd224998c685013d945e909cd81acf namespace=k8s.io Apr 14 00:18:41.439726 containerd[1477]: time="2026-04-14T00:18:41.439681622Z" level=warning msg="cleaning up after shim disconnected" id=1e6fe5694ed9aab5ca826eff0e5e7439d9bd224998c685013d945e909cd81acf namespace=k8s.io Apr 14 00:18:41.439726 containerd[1477]: time="2026-04-14T00:18:41.439693917Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:18:41.731652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e6fe5694ed9aab5ca826eff0e5e7439d9bd224998c685013d945e909cd81acf-rootfs.mount: Deactivated successfully. 
Apr 14 00:18:41.908665 kubelet[2624]: I0414 00:18:41.908325 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vbmvw" podStartSLOduration=8.908269146 podStartE2EDuration="8.908269146s" podCreationTimestamp="2026-04-14 00:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:18:41.905189867 +0000 UTC m=+57.804952066" watchObservedRunningTime="2026-04-14 00:18:41.908269146 +0000 UTC m=+57.808031350" Apr 14 00:18:42.476512 kubelet[2624]: E0414 00:18:42.476339 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:42.481368 containerd[1477]: time="2026-04-14T00:18:42.481248112Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 14 00:18:50.119320 containerd[1477]: time="2026-04-14T00:18:50.119068279Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:18:50.125337 containerd[1477]: time="2026-04-14T00:18:50.123808543Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Apr 14 00:18:50.133911 containerd[1477]: time="2026-04-14T00:18:50.132713568Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:18:50.213232 containerd[1477]: time="2026-04-14T00:18:50.212950454Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:18:50.227772 containerd[1477]: time="2026-04-14T00:18:50.227660406Z" level=info msg="Pulled image 
\"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 7.746318209s" Apr 14 00:18:50.227772 containerd[1477]: time="2026-04-14T00:18:50.227745414Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Apr 14 00:18:50.261771 containerd[1477]: time="2026-04-14T00:18:50.261682392Z" level=info msg="CreateContainer within sandbox \"c4cf1ac99a1ea68cb821a4b20ea56657d5907175d7a8b088bae8d2b10a1143c2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 14 00:18:50.298917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782503325.mount: Deactivated successfully. Apr 14 00:18:50.308098 containerd[1477]: time="2026-04-14T00:18:50.305032984Z" level=info msg="CreateContainer within sandbox \"c4cf1ac99a1ea68cb821a4b20ea56657d5907175d7a8b088bae8d2b10a1143c2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"abdf293d15979b65c6f0a0655f53858f857eafac3c243d99b67549c77d49a534\"" Apr 14 00:18:50.322696 containerd[1477]: time="2026-04-14T00:18:50.318458238Z" level=info msg="StartContainer for \"abdf293d15979b65c6f0a0655f53858f857eafac3c243d99b67549c77d49a534\"" Apr 14 00:18:50.471115 systemd[1]: Started cri-containerd-abdf293d15979b65c6f0a0655f53858f857eafac3c243d99b67549c77d49a534.scope - libcontainer container abdf293d15979b65c6f0a0655f53858f857eafac3c243d99b67549c77d49a534. Apr 14 00:18:50.616211 systemd[1]: cri-containerd-abdf293d15979b65c6f0a0655f53858f857eafac3c243d99b67549c77d49a534.scope: Deactivated successfully. 
Apr 14 00:18:50.632817 containerd[1477]: time="2026-04-14T00:18:50.632735471Z" level=info msg="StartContainer for \"abdf293d15979b65c6f0a0655f53858f857eafac3c243d99b67549c77d49a534\" returns successfully" Apr 14 00:18:50.745325 kubelet[2624]: I0414 00:18:50.744858 2624 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 14 00:18:50.818585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abdf293d15979b65c6f0a0655f53858f857eafac3c243d99b67549c77d49a534-rootfs.mount: Deactivated successfully. Apr 14 00:18:50.885309 kubelet[2624]: E0414 00:18:50.878284 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:50.974730 containerd[1477]: time="2026-04-14T00:18:50.974458193Z" level=info msg="shim disconnected" id=abdf293d15979b65c6f0a0655f53858f857eafac3c243d99b67549c77d49a534 namespace=k8s.io Apr 14 00:18:50.974730 containerd[1477]: time="2026-04-14T00:18:50.974648587Z" level=warning msg="cleaning up after shim disconnected" id=abdf293d15979b65c6f0a0655f53858f857eafac3c243d99b67549c77d49a534 namespace=k8s.io Apr 14 00:18:50.974730 containerd[1477]: time="2026-04-14T00:18:50.974693625Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:18:51.912958 kubelet[2624]: E0414 00:18:51.911569 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:51.986635 containerd[1477]: time="2026-04-14T00:18:51.985603393Z" level=info msg="CreateContainer within sandbox \"c4cf1ac99a1ea68cb821a4b20ea56657d5907175d7a8b088bae8d2b10a1143c2\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Apr 14 00:18:52.096073 containerd[1477]: time="2026-04-14T00:18:52.095279226Z" level=info msg="CreateContainer within sandbox 
\"c4cf1ac99a1ea68cb821a4b20ea56657d5907175d7a8b088bae8d2b10a1143c2\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"911f3741e4488509a70ac3c4cb3ed098726d65e0c3000b2b96f31cb42facdad9\"" Apr 14 00:18:52.109421 containerd[1477]: time="2026-04-14T00:18:52.109049447Z" level=info msg="StartContainer for \"911f3741e4488509a70ac3c4cb3ed098726d65e0c3000b2b96f31cb42facdad9\"" Apr 14 00:18:52.429962 kubelet[2624]: I0414 00:18:52.429858 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm924\" (UniqueName: \"kubernetes.io/projected/07ec18f5-ad32-4be0-9206-65cfe5d2a0f3-kube-api-access-hm924\") pod \"coredns-674b8bbfcf-6wnjd\" (UID: \"07ec18f5-ad32-4be0-9206-65cfe5d2a0f3\") " pod="kube-system/coredns-674b8bbfcf-6wnjd" Apr 14 00:18:52.430329 kubelet[2624]: I0414 00:18:52.429953 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07ec18f5-ad32-4be0-9206-65cfe5d2a0f3-config-volume\") pod \"coredns-674b8bbfcf-6wnjd\" (UID: \"07ec18f5-ad32-4be0-9206-65cfe5d2a0f3\") " pod="kube-system/coredns-674b8bbfcf-6wnjd" Apr 14 00:18:52.486205 systemd[1]: Created slice kubepods-burstable-pod07ec18f5_ad32_4be0_9206_65cfe5d2a0f3.slice - libcontainer container kubepods-burstable-pod07ec18f5_ad32_4be0_9206_65cfe5d2a0f3.slice. Apr 14 00:18:52.500752 systemd[1]: Started cri-containerd-911f3741e4488509a70ac3c4cb3ed098726d65e0c3000b2b96f31cb42facdad9.scope - libcontainer container 911f3741e4488509a70ac3c4cb3ed098726d65e0c3000b2b96f31cb42facdad9. Apr 14 00:18:52.538834 systemd[1]: Created slice kubepods-burstable-pode8e485b7_c44f_406d_8c8d_2dec1814875c.slice - libcontainer container kubepods-burstable-pode8e485b7_c44f_406d_8c8d_2dec1814875c.slice. 
Apr 14 00:18:52.539780 kubelet[2624]: I0414 00:18:52.539244 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8e485b7-c44f-406d-8c8d-2dec1814875c-config-volume\") pod \"coredns-674b8bbfcf-5jkt9\" (UID: \"e8e485b7-c44f-406d-8c8d-2dec1814875c\") " pod="kube-system/coredns-674b8bbfcf-5jkt9" Apr 14 00:18:52.539780 kubelet[2624]: I0414 00:18:52.539373 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9ctx\" (UniqueName: \"kubernetes.io/projected/e8e485b7-c44f-406d-8c8d-2dec1814875c-kube-api-access-j9ctx\") pod \"coredns-674b8bbfcf-5jkt9\" (UID: \"e8e485b7-c44f-406d-8c8d-2dec1814875c\") " pod="kube-system/coredns-674b8bbfcf-5jkt9" Apr 14 00:18:52.871539 containerd[1477]: time="2026-04-14T00:18:52.871480312Z" level=info msg="StartContainer for \"911f3741e4488509a70ac3c4cb3ed098726d65e0c3000b2b96f31cb42facdad9\" returns successfully" Apr 14 00:18:53.021620 kubelet[2624]: E0414 00:18:53.016681 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:53.142231 kubelet[2624]: E0414 00:18:53.141822 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:53.148360 containerd[1477]: time="2026-04-14T00:18:53.148109604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6wnjd,Uid:07ec18f5-ad32-4be0-9206-65cfe5d2a0f3,Namespace:kube-system,Attempt:0,}" Apr 14 00:18:53.263134 kubelet[2624]: E0414 00:18:53.262855 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:53.264372 
containerd[1477]: time="2026-04-14T00:18:53.263684862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5jkt9,Uid:e8e485b7-c44f-406d-8c8d-2dec1814875c,Namespace:kube-system,Attempt:0,}" Apr 14 00:18:53.451021 containerd[1477]: time="2026-04-14T00:18:53.450775748Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6wnjd,Uid:07ec18f5-ad32-4be0-9206-65cfe5d2a0f3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f337745560d2bd000eddc28583b4d97f210d89da0130a035fcdbafc84289ff8c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 14 00:18:53.461524 kubelet[2624]: E0414 00:18:53.461338 2624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f337745560d2bd000eddc28583b4d97f210d89da0130a035fcdbafc84289ff8c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 14 00:18:53.464382 kubelet[2624]: E0414 00:18:53.462707 2624 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f337745560d2bd000eddc28583b4d97f210d89da0130a035fcdbafc84289ff8c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-6wnjd" Apr 14 00:18:53.464382 kubelet[2624]: E0414 00:18:53.462794 2624 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f337745560d2bd000eddc28583b4d97f210d89da0130a035fcdbafc84289ff8c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-6wnjd" Apr 14 00:18:53.464382 
kubelet[2624]: E0414 00:18:53.463832 2624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6wnjd_kube-system(07ec18f5-ad32-4be0-9206-65cfe5d2a0f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6wnjd_kube-system(07ec18f5-ad32-4be0-9206-65cfe5d2a0f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f337745560d2bd000eddc28583b4d97f210d89da0130a035fcdbafc84289ff8c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-6wnjd" podUID="07ec18f5-ad32-4be0-9206-65cfe5d2a0f3" Apr 14 00:18:53.487692 containerd[1477]: time="2026-04-14T00:18:53.487317694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5jkt9,Uid:e8e485b7-c44f-406d-8c8d-2dec1814875c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ceecbf1782fc44e16f020d16c7d20a48fb853ebeb17dff8f1b08ce1e8110178\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 14 00:18:53.503532 kubelet[2624]: E0414 00:18:53.502998 2624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ceecbf1782fc44e16f020d16c7d20a48fb853ebeb17dff8f1b08ce1e8110178\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 14 00:18:53.503532 kubelet[2624]: E0414 00:18:53.503530 2624 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ceecbf1782fc44e16f020d16c7d20a48fb853ebeb17dff8f1b08ce1e8110178\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-674b8bbfcf-5jkt9" Apr 14 00:18:53.503532 kubelet[2624]: E0414 00:18:53.503563 2624 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ceecbf1782fc44e16f020d16c7d20a48fb853ebeb17dff8f1b08ce1e8110178\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-5jkt9" Apr 14 00:18:53.504000 kubelet[2624]: E0414 00:18:53.503638 2624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5jkt9_kube-system(e8e485b7-c44f-406d-8c8d-2dec1814875c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5jkt9_kube-system(e8e485b7-c44f-406d-8c8d-2dec1814875c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ceecbf1782fc44e16f020d16c7d20a48fb853ebeb17dff8f1b08ce1e8110178\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-5jkt9" podUID="e8e485b7-c44f-406d-8c8d-2dec1814875c" Apr 14 00:18:53.790675 kubelet[2624]: E0414 00:18:53.789068 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:54.044180 kubelet[2624]: E0414 00:18:54.043853 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:18:54.094847 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ceecbf1782fc44e16f020d16c7d20a48fb853ebeb17dff8f1b08ce1e8110178-shm.mount: Deactivated successfully. 
Apr 14 00:18:54.095955 systemd[1]: run-netns-cni\x2d8229f5f3\x2d5aed\x2d2b82\x2de635\x2d9dcca2826d85.mount: Deactivated successfully. Apr 14 00:18:54.096122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f337745560d2bd000eddc28583b4d97f210d89da0130a035fcdbafc84289ff8c-shm.mount: Deactivated successfully. Apr 14 00:18:54.884796 kubelet[2624]: I0414 00:18:54.884168 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-dxl9z" podStartSLOduration=8.961285839 podStartE2EDuration="21.883811868s" podCreationTimestamp="2026-04-14 00:18:33 +0000 UTC" firstStartedPulling="2026-04-14 00:18:37.32022838 +0000 UTC m=+53.219990576" lastFinishedPulling="2026-04-14 00:18:50.24275441 +0000 UTC m=+66.142516605" observedRunningTime="2026-04-14 00:18:54.730989691 +0000 UTC m=+70.630751895" watchObservedRunningTime="2026-04-14 00:18:54.883811868 +0000 UTC m=+70.783574062" Apr 14 00:18:55.523896 systemd-networkd[1397]: flannel.1: Link UP Apr 14 00:18:55.523904 systemd-networkd[1397]: flannel.1: Gained carrier Apr 14 00:18:57.349162 systemd-networkd[1397]: flannel.1: Gained IPv6LL Apr 14 00:19:00.794137 kubelet[2624]: E0414 00:19:00.791183 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:04.791619 kubelet[2624]: E0414 00:19:04.791584 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:04.796095 containerd[1477]: time="2026-04-14T00:19:04.795838277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5jkt9,Uid:e8e485b7-c44f-406d-8c8d-2dec1814875c,Namespace:kube-system,Attempt:0,}" Apr 14 00:19:05.002002 systemd-networkd[1397]: cni0: Link UP Apr 14 00:19:05.002043 systemd-networkd[1397]: cni0: Gained carrier Apr 14 
00:19:05.034351 systemd-networkd[1397]: cni0: Lost carrier Apr 14 00:19:05.040596 kernel: cni0: port 1(vethabfd9da6) entered blocking state Apr 14 00:19:05.040729 kernel: cni0: port 1(vethabfd9da6) entered disabled state Apr 14 00:19:05.040744 kernel: vethabfd9da6: entered allmulticast mode Apr 14 00:19:05.040759 kernel: vethabfd9da6: entered promiscuous mode Apr 14 00:19:05.043796 kernel: cni0: port 1(vethabfd9da6) entered blocking state Apr 14 00:19:05.043927 kernel: cni0: port 1(vethabfd9da6) entered forwarding state Apr 14 00:19:05.046269 kernel: cni0: port 1(vethabfd9da6) entered disabled state Apr 14 00:19:05.044783 systemd-networkd[1397]: vethabfd9da6: Link UP Apr 14 00:19:05.060746 kernel: cni0: port 1(vethabfd9da6) entered blocking state Apr 14 00:19:05.060905 kernel: cni0: port 1(vethabfd9da6) entered forwarding state Apr 14 00:19:05.061357 systemd-networkd[1397]: vethabfd9da6: Gained carrier Apr 14 00:19:05.062501 systemd-networkd[1397]: cni0: Gained carrier Apr 14 00:19:05.128319 containerd[1477]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000012440), "name":"cbr0", "type":"bridge"} Apr 14 00:19:05.128319 containerd[1477]: delegateAdd: netconf sent to delegate plugin: Apr 14 00:19:05.468931 containerd[1477]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-14T00:19:05.465908535Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:19:05.468931 containerd[1477]: time="2026-04-14T00:19:05.466632565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:19:05.468931 containerd[1477]: time="2026-04-14T00:19:05.466704755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:19:05.471620 containerd[1477]: time="2026-04-14T00:19:05.471074753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:19:05.577730 systemd[1]: Started cri-containerd-08ff7228562f0c645e4d721890c9aea2a99acb48d3412c651db9ea14c0c7aa24.scope - libcontainer container 08ff7228562f0c645e4d721890c9aea2a99acb48d3412c651db9ea14c0c7aa24. Apr 14 00:19:05.661679 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:19:05.796723 kubelet[2624]: E0414 00:19:05.793823 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:05.901462 containerd[1477]: time="2026-04-14T00:19:05.899656534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5jkt9,Uid:e8e485b7-c44f-406d-8c8d-2dec1814875c,Namespace:kube-system,Attempt:0,} returns sandbox id \"08ff7228562f0c645e4d721890c9aea2a99acb48d3412c651db9ea14c0c7aa24\"" Apr 14 00:19:05.936529 kubelet[2624]: E0414 00:19:05.936350 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:06.078947 containerd[1477]: time="2026-04-14T00:19:06.078367854Z" 
level=info msg="CreateContainer within sandbox \"08ff7228562f0c645e4d721890c9aea2a99acb48d3412c651db9ea14c0c7aa24\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 00:19:06.165540 containerd[1477]: time="2026-04-14T00:19:06.164750132Z" level=info msg="CreateContainer within sandbox \"08ff7228562f0c645e4d721890c9aea2a99acb48d3412c651db9ea14c0c7aa24\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b944c0bb799cee901b13b75c686ecfe7c1477e66454c2532f3fc91c55a91dab\"" Apr 14 00:19:06.207144 containerd[1477]: time="2026-04-14T00:19:06.203963563Z" level=info msg="StartContainer for \"1b944c0bb799cee901b13b75c686ecfe7c1477e66454c2532f3fc91c55a91dab\"" Apr 14 00:19:06.245488 systemd-networkd[1397]: cni0: Gained IPv6LL Apr 14 00:19:06.485836 systemd[1]: Started cri-containerd-1b944c0bb799cee901b13b75c686ecfe7c1477e66454c2532f3fc91c55a91dab.scope - libcontainer container 1b944c0bb799cee901b13b75c686ecfe7c1477e66454c2532f3fc91c55a91dab. Apr 14 00:19:06.706637 containerd[1477]: time="2026-04-14T00:19:06.705943209Z" level=info msg="StartContainer for \"1b944c0bb799cee901b13b75c686ecfe7c1477e66454c2532f3fc91c55a91dab\" returns successfully" Apr 14 00:19:07.075363 systemd-networkd[1397]: vethabfd9da6: Gained IPv6LL Apr 14 00:19:07.507859 kubelet[2624]: E0414 00:19:07.507755 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:08.529367 kubelet[2624]: E0414 00:19:08.529122 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:08.848086 kubelet[2624]: E0414 00:19:08.847901 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:08.853870 
containerd[1477]: time="2026-04-14T00:19:08.853755866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6wnjd,Uid:07ec18f5-ad32-4be0-9206-65cfe5d2a0f3,Namespace:kube-system,Attempt:0,}" Apr 14 00:19:09.031347 systemd-networkd[1397]: vethee4d7747: Link UP Apr 14 00:19:09.038845 kernel: cni0: port 2(vethee4d7747) entered blocking state Apr 14 00:19:09.039061 kernel: cni0: port 2(vethee4d7747) entered disabled state Apr 14 00:19:09.042552 kernel: vethee4d7747: entered allmulticast mode Apr 14 00:19:09.045493 kernel: vethee4d7747: entered promiscuous mode Apr 14 00:19:09.080487 kernel: cni0: port 2(vethee4d7747) entered blocking state Apr 14 00:19:09.080671 kernel: cni0: port 2(vethee4d7747) entered forwarding state Apr 14 00:19:09.081677 systemd-networkd[1397]: vethee4d7747: Gained carrier Apr 14 00:19:09.149754 containerd[1477]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000102940), "name":"cbr0", "type":"bridge"} Apr 14 00:19:09.149754 containerd[1477]: delegateAdd: netconf sent to delegate plugin: Apr 14 00:19:09.488724 containerd[1477]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-14T00:19:09.487802043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:19:09.489004 containerd[1477]: time="2026-04-14T00:19:09.488957616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:19:09.489040 containerd[1477]: time="2026-04-14T00:19:09.489022133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:19:09.491231 containerd[1477]: time="2026-04-14T00:19:09.490238421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:19:09.532762 kubelet[2624]: E0414 00:19:09.532624 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:09.615098 systemd[1]: Started cri-containerd-63d6d2a148c096c238cd2ebe05948c1a4eb1bdceff274c1873107b644f439a18.scope - libcontainer container 63d6d2a148c096c238cd2ebe05948c1a4eb1bdceff274c1873107b644f439a18. 
Apr 14 00:19:09.669499 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:19:09.908581 containerd[1477]: time="2026-04-14T00:19:09.908462515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6wnjd,Uid:07ec18f5-ad32-4be0-9206-65cfe5d2a0f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"63d6d2a148c096c238cd2ebe05948c1a4eb1bdceff274c1873107b644f439a18\"" Apr 14 00:19:09.918049 kubelet[2624]: E0414 00:19:09.915897 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:09.943542 containerd[1477]: time="2026-04-14T00:19:09.943467911Z" level=info msg="CreateContainer within sandbox \"63d6d2a148c096c238cd2ebe05948c1a4eb1bdceff274c1873107b644f439a18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 00:19:10.065344 containerd[1477]: time="2026-04-14T00:19:10.064344791Z" level=info msg="CreateContainer within sandbox \"63d6d2a148c096c238cd2ebe05948c1a4eb1bdceff274c1873107b644f439a18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eaff288f8c4b7aa00bc4dfb3c5e3b6b45562ba7c57995871754673ed7c0f9e64\"" Apr 14 00:19:10.108649 containerd[1477]: time="2026-04-14T00:19:10.107005986Z" level=info msg="StartContainer for \"eaff288f8c4b7aa00bc4dfb3c5e3b6b45562ba7c57995871754673ed7c0f9e64\"" Apr 14 00:19:10.234093 kubelet[2624]: I0414 00:19:10.224195 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5jkt9" podStartSLOduration=36.224176961 podStartE2EDuration="36.224176961s" podCreationTimestamp="2026-04-14 00:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:19:09.082385199 +0000 UTC m=+84.982147398" watchObservedRunningTime="2026-04-14 
00:19:10.224176961 +0000 UTC m=+86.123939156" Apr 14 00:19:10.434028 systemd[1]: Started cri-containerd-eaff288f8c4b7aa00bc4dfb3c5e3b6b45562ba7c57995871754673ed7c0f9e64.scope - libcontainer container eaff288f8c4b7aa00bc4dfb3c5e3b6b45562ba7c57995871754673ed7c0f9e64. Apr 14 00:19:10.709987 containerd[1477]: time="2026-04-14T00:19:10.708128491Z" level=info msg="StartContainer for \"eaff288f8c4b7aa00bc4dfb3c5e3b6b45562ba7c57995871754673ed7c0f9e64\" returns successfully" Apr 14 00:19:10.730367 systemd-networkd[1397]: vethee4d7747: Gained IPv6LL Apr 14 00:19:11.624722 kubelet[2624]: E0414 00:19:11.624626 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:12.683587 kubelet[2624]: E0414 00:19:12.679133 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:13.691977 kubelet[2624]: E0414 00:19:13.691734 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:14.013957 kubelet[2624]: I0414 00:19:14.013670 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6wnjd" podStartSLOduration=41.013634963 podStartE2EDuration="41.013634963s" podCreationTimestamp="2026-04-14 00:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:19:13.240736387 +0000 UTC m=+89.140498577" watchObservedRunningTime="2026-04-14 00:19:14.013634963 +0000 UTC m=+89.913397168" Apr 14 00:19:50.813368 kubelet[2624]: E0414 00:19:50.806388 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:19:58.801226 kubelet[2624]: E0414 00:19:58.801076 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:20:12.798644 kubelet[2624]: E0414 00:20:12.798126 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:20:19.826341 kubelet[2624]: E0414 00:20:19.825845 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:20:23.797234 kubelet[2624]: E0414 00:20:23.796841 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:20:26.779288 kubelet[2624]: E0414 00:20:26.779191 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:20:34.778353 kubelet[2624]: E0414 00:20:34.778277 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:20:42.449238 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:60992.service - OpenSSH per-connection server daemon (10.0.0.1:60992). Apr 14 00:20:42.626852 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 60992 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:20:42.634851 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:20:42.665393 systemd-logind[1460]: New session 6 of user core. 
Apr 14 00:20:42.677127 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 14 00:20:43.782906 sshd[3953]: pam_unix(sshd:session): session closed for user core Apr 14 00:20:43.796988 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:60992.service: Deactivated successfully. Apr 14 00:20:43.808820 systemd[1]: session-6.scope: Deactivated successfully. Apr 14 00:20:43.822504 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Apr 14 00:20:43.827969 systemd-logind[1460]: Removed session 6. Apr 14 00:20:48.892315 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:59762.service - OpenSSH per-connection server daemon (10.0.0.1:59762). Apr 14 00:20:49.205938 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 59762 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:20:49.212212 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:20:49.272235 systemd-logind[1460]: New session 7 of user core. Apr 14 00:20:49.288398 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 14 00:20:50.361134 sshd[3999]: pam_unix(sshd:session): session closed for user core Apr 14 00:20:50.387796 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:59762.service: Deactivated successfully. Apr 14 00:20:50.412846 systemd[1]: session-7.scope: Deactivated successfully. Apr 14 00:20:50.421242 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Apr 14 00:20:50.433103 systemd-logind[1460]: Removed session 7. Apr 14 00:20:55.442805 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:60484.service - OpenSSH per-connection server daemon (10.0.0.1:60484). Apr 14 00:20:55.634254 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 60484 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:20:55.711060 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:20:55.771148 systemd-logind[1460]: New session 8 of user core. 
Apr 14 00:20:55.786344 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 14 00:20:55.793645 kubelet[2624]: E0414 00:20:55.789426 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:20:56.787398 sshd[4048]: pam_unix(sshd:session): session closed for user core Apr 14 00:20:56.851583 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:60484.service: Deactivated successfully. Apr 14 00:20:56.865361 systemd[1]: session-8.scope: Deactivated successfully. Apr 14 00:20:56.872266 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Apr 14 00:20:56.874923 systemd-logind[1460]: Removed session 8. Apr 14 00:21:01.891358 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:60492.service - OpenSSH per-connection server daemon (10.0.0.1:60492). Apr 14 00:21:02.064099 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 60492 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:21:02.073932 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:21:02.120398 systemd-logind[1460]: New session 9 of user core. Apr 14 00:21:02.145160 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 14 00:21:03.788329 sshd[4087]: pam_unix(sshd:session): session closed for user core Apr 14 00:21:03.810235 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Apr 14 00:21:03.816128 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:60492.service: Deactivated successfully. Apr 14 00:21:03.830949 systemd[1]: session-9.scope: Deactivated successfully. Apr 14 00:21:03.832233 systemd[1]: session-9.scope: Consumed 1.033s CPU time. Apr 14 00:21:03.841060 systemd-logind[1460]: Removed session 9. Apr 14 00:21:08.933529 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:44376.service - OpenSSH per-connection server daemon (10.0.0.1:44376). 
Apr 14 00:21:09.327153 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 44376 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:21:09.350054 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:21:09.406392 systemd-logind[1460]: New session 10 of user core. Apr 14 00:21:09.505508 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 14 00:21:11.416251 sshd[4129]: pam_unix(sshd:session): session closed for user core Apr 14 00:21:11.500925 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:44376.service: Deactivated successfully. Apr 14 00:21:11.513372 systemd[1]: session-10.scope: Deactivated successfully. Apr 14 00:21:11.515286 systemd[1]: session-10.scope: Consumed 1.152s CPU time. Apr 14 00:21:11.520377 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. Apr 14 00:21:11.527430 systemd-logind[1460]: Removed session 10. Apr 14 00:21:14.824062 kubelet[2624]: E0414 00:21:14.817306 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:21:16.533514 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:55986.service - OpenSSH per-connection server daemon (10.0.0.1:55986). Apr 14 00:21:16.903195 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 55986 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:21:16.932108 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:21:17.011701 systemd-logind[1460]: New session 11 of user core. Apr 14 00:21:17.107020 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 14 00:21:18.756487 sshd[4180]: pam_unix(sshd:session): session closed for user core Apr 14 00:21:18.774133 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:55986.service: Deactivated successfully. 
Apr 14 00:21:18.789256 systemd[1]: session-11.scope: Deactivated successfully.
Apr 14 00:21:18.809450 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit.
Apr 14 00:21:18.825322 systemd-logind[1460]: Removed session 11.
Apr 14 00:21:20.781226 kubelet[2624]: E0414 00:21:20.781061 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:21:23.914643 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:56002.service - OpenSSH per-connection server daemon (10.0.0.1:56002).
Apr 14 00:21:24.203909 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 56002 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:21:24.233191 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:21:24.388574 systemd-logind[1460]: New session 12 of user core.
Apr 14 00:21:24.443187 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 14 00:21:26.530275 sshd[4222]: pam_unix(sshd:session): session closed for user core
Apr 14 00:21:26.611727 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:56002.service: Deactivated successfully.
Apr 14 00:21:26.620088 systemd[1]: session-12.scope: Deactivated successfully.
Apr 14 00:21:26.623225 systemd[1]: session-12.scope: Consumed 1.455s CPU time.
Apr 14 00:21:26.640471 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit.
Apr 14 00:21:26.688780 systemd-logind[1460]: Removed session 12.
Apr 14 00:21:27.825091 kubelet[2624]: E0414 00:21:27.823044 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:21:31.704558 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:59426.service - OpenSSH per-connection server daemon (10.0.0.1:59426).
Apr 14 00:21:32.148799 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 59426 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:21:32.172217 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:21:32.311009 systemd-logind[1460]: New session 13 of user core.
Apr 14 00:21:32.328493 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 14 00:21:34.730093 sshd[4257]: pam_unix(sshd:session): session closed for user core
Apr 14 00:21:34.838356 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:59426.service: Deactivated successfully.
Apr 14 00:21:34.843067 systemd[1]: session-13.scope: Deactivated successfully.
Apr 14 00:21:34.844056 systemd[1]: session-13.scope: Consumed 1.538s CPU time.
Apr 14 00:21:34.862998 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit.
Apr 14 00:21:34.876968 systemd-logind[1460]: Removed session 13.
Apr 14 00:21:36.804168 kubelet[2624]: E0414 00:21:36.802994 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:21:39.877764 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:50210.service - OpenSSH per-connection server daemon (10.0.0.1:50210).
Apr 14 00:21:40.485070 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 50210 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:21:40.502558 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:21:40.659241 systemd-logind[1460]: New session 14 of user core.
Apr 14 00:21:40.729036 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 14 00:21:43.060520 sshd[4313]: pam_unix(sshd:session): session closed for user core
Apr 14 00:21:43.109795 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:50210.service: Deactivated successfully.
Apr 14 00:21:43.119289 systemd[1]: session-14.scope: Deactivated successfully.
Apr 14 00:21:43.130367 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit.
Apr 14 00:21:43.146667 systemd-logind[1460]: Removed session 14.
Apr 14 00:21:48.236812 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:45158.service - OpenSSH per-connection server daemon (10.0.0.1:45158).
Apr 14 00:21:48.864724 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 45158 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:21:48.883955 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:21:49.013975 systemd-logind[1460]: New session 15 of user core.
Apr 14 00:21:49.062959 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 14 00:21:52.621384 sshd[4353]: pam_unix(sshd:session): session closed for user core
Apr 14 00:21:52.723852 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit.
Apr 14 00:21:52.728657 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:45158.service: Deactivated successfully.
Apr 14 00:21:52.756662 systemd[1]: session-15.scope: Deactivated successfully.
Apr 14 00:21:52.756930 systemd[1]: session-15.scope: Consumed 2.266s CPU time.
Apr 14 00:21:52.764813 systemd-logind[1460]: Removed session 15.
Apr 14 00:21:55.790988 kubelet[2624]: E0414 00:21:55.790726 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:21:55.800614 kubelet[2624]: E0414 00:21:55.800287 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:21:57.773583 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:32974.service - OpenSSH per-connection server daemon (10.0.0.1:32974).
Apr 14 00:21:58.506877 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 32974 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:21:58.511853 sshd[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:21:58.674883 systemd-logind[1460]: New session 16 of user core.
Apr 14 00:21:58.721885 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 14 00:22:00.739931 sshd[4409]: pam_unix(sshd:session): session closed for user core
Apr 14 00:22:00.777387 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:32974.service: Deactivated successfully.
Apr 14 00:22:00.822965 systemd[1]: session-16.scope: Deactivated successfully.
Apr 14 00:22:00.827728 systemd[1]: session-16.scope: Consumed 1.060s CPU time.
Apr 14 00:22:00.846965 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit.
Apr 14 00:22:00.896963 systemd-logind[1460]: Removed session 16.
Apr 14 00:22:05.902599 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:53342.service - OpenSSH per-connection server daemon (10.0.0.1:53342).
Apr 14 00:22:06.463329 sshd[4464]: Accepted publickey for core from 10.0.0.1 port 53342 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:22:06.512899 sshd[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:22:06.550314 systemd-logind[1460]: New session 17 of user core.
Apr 14 00:22:06.598896 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 14 00:22:07.799720 sshd[4464]: pam_unix(sshd:session): session closed for user core
Apr 14 00:22:07.914653 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:53342.service: Deactivated successfully.
Apr 14 00:22:07.940171 systemd[1]: session-17.scope: Deactivated successfully.
Apr 14 00:22:07.959661 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit.
Apr 14 00:22:07.969554 systemd-logind[1460]: Removed session 17.
Apr 14 00:22:12.935029 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:53344.service - OpenSSH per-connection server daemon (10.0.0.1:53344).
Apr 14 00:22:13.324073 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 53344 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:22:13.341188 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:22:13.392704 systemd-logind[1460]: New session 18 of user core.
Apr 14 00:22:13.414062 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 14 00:22:14.788051 kubelet[2624]: E0414 00:22:14.787874 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:22:15.598869 sshd[4501]: pam_unix(sshd:session): session closed for user core
Apr 14 00:22:15.625700 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:53344.service: Deactivated successfully.
Apr 14 00:22:15.668176 systemd[1]: session-18.scope: Deactivated successfully.
Apr 14 00:22:15.672512 systemd[1]: session-18.scope: Consumed 1.411s CPU time.
Apr 14 00:22:15.693960 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit.
Apr 14 00:22:15.738940 systemd-logind[1460]: Removed session 18.
Apr 14 00:22:19.785997 kubelet[2624]: E0414 00:22:19.785905 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:22:20.793748 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:58788.service - OpenSSH per-connection server daemon (10.0.0.1:58788).
Apr 14 00:22:21.073873 sshd[4542]: Accepted publickey for core from 10.0.0.1 port 58788 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:22:21.112038 sshd[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:22:21.248586 systemd-logind[1460]: New session 19 of user core.
Apr 14 00:22:21.382918 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 14 00:22:23.930642 sshd[4542]: pam_unix(sshd:session): session closed for user core
Apr 14 00:22:24.009513 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit.
Apr 14 00:22:24.011892 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:58788.service: Deactivated successfully.
Apr 14 00:22:24.024340 systemd[1]: session-19.scope: Deactivated successfully.
Apr 14 00:22:24.027114 systemd[1]: session-19.scope: Consumed 1.222s CPU time.
Apr 14 00:22:24.050373 systemd-logind[1460]: Removed session 19.
Apr 14 00:22:29.198458 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:44198.service - OpenSSH per-connection server daemon (10.0.0.1:44198).
Apr 14 00:22:29.679836 sshd[4592]: Accepted publickey for core from 10.0.0.1 port 44198 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:22:29.686674 sshd[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:22:29.900179 systemd-logind[1460]: New session 20 of user core.
Apr 14 00:22:29.958395 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 14 00:22:31.626636 sshd[4592]: pam_unix(sshd:session): session closed for user core
Apr 14 00:22:31.820601 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:44198.service: Deactivated successfully.
Apr 14 00:22:31.875462 systemd[1]: session-20.scope: Deactivated successfully.
Apr 14 00:22:31.980694 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit.
Apr 14 00:22:31.992062 systemd-logind[1460]: Removed session 20.
Apr 14 00:22:36.777849 kubelet[2624]: E0414 00:22:36.777514 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:22:36.780318 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:34748.service - OpenSSH per-connection server daemon (10.0.0.1:34748).
Apr 14 00:22:37.060115 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 34748 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:22:37.060387 sshd[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:22:37.242126 systemd-logind[1460]: New session 21 of user core.
Apr 14 00:22:37.301941 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 14 00:22:38.793193 sshd[4634]: pam_unix(sshd:session): session closed for user core
Apr 14 00:22:38.868143 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:34748.service: Deactivated successfully.
Apr 14 00:22:38.917108 systemd[1]: session-21.scope: Deactivated successfully.
Apr 14 00:22:38.930906 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit.
Apr 14 00:22:38.936673 systemd-logind[1460]: Removed session 21.
Apr 14 00:22:41.790474 kubelet[2624]: E0414 00:22:41.787138 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:22:43.897242 systemd[1]: Started sshd@21-10.0.0.74:22-10.0.0.1:34762.service - OpenSSH per-connection server daemon (10.0.0.1:34762).
Apr 14 00:22:44.522851 sshd[4685]: Accepted publickey for core from 10.0.0.1 port 34762 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:22:44.544999 sshd[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:22:44.579450 systemd-logind[1460]: New session 22 of user core.
Apr 14 00:22:44.620256 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 14 00:22:45.766091 sshd[4685]: pam_unix(sshd:session): session closed for user core
Apr 14 00:22:45.830805 systemd[1]: sshd@21-10.0.0.74:22-10.0.0.1:34762.service: Deactivated successfully.
Apr 14 00:22:45.855669 systemd[1]: session-22.scope: Deactivated successfully.
Apr 14 00:22:45.885635 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit.
Apr 14 00:22:45.924175 systemd-logind[1460]: Removed session 22.
Apr 14 00:22:48.825026 kubelet[2624]: E0414 00:22:48.824688 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:22:50.908919 systemd[1]: Started sshd@22-10.0.0.74:22-10.0.0.1:40294.service - OpenSSH per-connection server daemon (10.0.0.1:40294).
Apr 14 00:22:51.322744 sshd[4723]: Accepted publickey for core from 10.0.0.1 port 40294 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:22:51.396310 sshd[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:22:51.501767 systemd-logind[1460]: New session 23 of user core.
Apr 14 00:22:51.601591 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 14 00:22:53.188229 sshd[4723]: pam_unix(sshd:session): session closed for user core
Apr 14 00:22:53.308355 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit.
Apr 14 00:22:53.318197 systemd[1]: sshd@22-10.0.0.74:22-10.0.0.1:40294.service: Deactivated successfully.
Apr 14 00:22:53.343118 systemd[1]: session-23.scope: Deactivated successfully.
Apr 14 00:22:53.367177 systemd-logind[1460]: Removed session 23.
Apr 14 00:22:58.359247 systemd[1]: Started sshd@23-10.0.0.74:22-10.0.0.1:49836.service - OpenSSH per-connection server daemon (10.0.0.1:49836).
Apr 14 00:22:59.034714 sshd[4764]: Accepted publickey for core from 10.0.0.1 port 49836 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:22:59.048173 sshd[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:22:59.207882 systemd-logind[1460]: New session 24 of user core.
Apr 14 00:22:59.307050 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 14 00:23:01.571253 sshd[4764]: pam_unix(sshd:session): session closed for user core
Apr 14 00:23:01.666616 systemd-logind[1460]: Session 24 logged out. Waiting for processes to exit.
Apr 14 00:23:01.675663 systemd[1]: sshd@23-10.0.0.74:22-10.0.0.1:49836.service: Deactivated successfully.
Apr 14 00:23:01.744842 systemd[1]: session-24.scope: Deactivated successfully.
Apr 14 00:23:01.747284 systemd[1]: session-24.scope: Consumed 1.279s CPU time.
Apr 14 00:23:01.822120 systemd-logind[1460]: Removed session 24.
Apr 14 00:23:05.807611 kubelet[2624]: E0414 00:23:05.802880 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:23:06.718920 systemd[1]: Started sshd@24-10.0.0.74:22-10.0.0.1:35202.service - OpenSSH per-connection server daemon (10.0.0.1:35202).
Apr 14 00:23:07.132485 sshd[4818]: Accepted publickey for core from 10.0.0.1 port 35202 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:23:07.145224 sshd[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:23:07.262134 systemd-logind[1460]: New session 25 of user core.
Apr 14 00:23:07.348361 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 14 00:23:10.030936 sshd[4818]: pam_unix(sshd:session): session closed for user core
Apr 14 00:23:10.113174 systemd[1]: sshd@24-10.0.0.74:22-10.0.0.1:35202.service: Deactivated successfully.
Apr 14 00:23:10.140660 systemd[1]: session-25.scope: Deactivated successfully.
Apr 14 00:23:10.141052 systemd[1]: session-25.scope: Consumed 1.608s CPU time.
Apr 14 00:23:10.148622 systemd-logind[1460]: Session 25 logged out. Waiting for processes to exit.
Apr 14 00:23:10.177970 systemd-logind[1460]: Removed session 25.
Apr 14 00:23:13.798705 kubelet[2624]: E0414 00:23:13.797147 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:23:15.198328 systemd[1]: Started sshd@25-10.0.0.74:22-10.0.0.1:35212.service - OpenSSH per-connection server daemon (10.0.0.1:35212).
Apr 14 00:23:15.521167 sshd[4858]: Accepted publickey for core from 10.0.0.1 port 35212 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:23:15.526604 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:23:15.619678 systemd-logind[1460]: New session 26 of user core.
Apr 14 00:23:15.645616 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 14 00:23:17.245187 sshd[4858]: pam_unix(sshd:session): session closed for user core
Apr 14 00:23:17.281314 systemd[1]: sshd@25-10.0.0.74:22-10.0.0.1:35212.service: Deactivated successfully.
Apr 14 00:23:17.311860 systemd[1]: session-26.scope: Deactivated successfully.
Apr 14 00:23:17.315967 systemd-logind[1460]: Session 26 logged out. Waiting for processes to exit.
Apr 14 00:23:17.320334 systemd-logind[1460]: Removed session 26.
Apr 14 00:23:22.487724 systemd[1]: Started sshd@26-10.0.0.74:22-10.0.0.1:39856.service - OpenSSH per-connection server daemon (10.0.0.1:39856).
Apr 14 00:23:23.010690 sshd[4914]: Accepted publickey for core from 10.0.0.1 port 39856 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:23:23.041038 sshd[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:23:23.142665 systemd-logind[1460]: New session 27 of user core.
Apr 14 00:23:23.233328 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 14 00:23:25.728689 sshd[4914]: pam_unix(sshd:session): session closed for user core
Apr 14 00:23:25.801699 systemd[1]: sshd@26-10.0.0.74:22-10.0.0.1:39856.service: Deactivated successfully.
Apr 14 00:23:25.852819 systemd[1]: session-27.scope: Deactivated successfully.
Apr 14 00:23:25.864123 systemd[1]: session-27.scope: Consumed 1.459s CPU time.
Apr 14 00:23:25.901253 systemd-logind[1460]: Session 27 logged out. Waiting for processes to exit.
Apr 14 00:23:25.917394 systemd-logind[1460]: Removed session 27.
Apr 14 00:23:25.988758 update_engine[1462]: I20260414 00:23:25.987797 1462 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 14 00:23:25.988758 update_engine[1462]: I20260414 00:23:25.987866 1462 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 14 00:23:25.988758 update_engine[1462]: I20260414 00:23:25.988656 1462 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 14 00:23:25.999765 update_engine[1462]: I20260414 00:23:25.999383 1462 omaha_request_params.cc:62] Current group set to lts
Apr 14 00:23:26.011188 update_engine[1462]: I20260414 00:23:26.002923 1462 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 14 00:23:26.011188 update_engine[1462]: I20260414 00:23:26.003046 1462 update_attempter.cc:643] Scheduling an action processor start.
Apr 14 00:23:26.011188 update_engine[1462]: I20260414 00:23:26.003083 1462 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 14 00:23:26.011188 update_engine[1462]: I20260414 00:23:26.003189 1462 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 14 00:23:26.011188 update_engine[1462]: I20260414 00:23:26.003283 1462 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 14 00:23:26.011188 update_engine[1462]: I20260414 00:23:26.003289 1462 omaha_request_action.cc:272] Request:
Apr 14 00:23:26.011188 update_engine[1462]:
Apr 14 00:23:26.011188 update_engine[1462]:
Apr 14 00:23:26.011188 update_engine[1462]:
Apr 14 00:23:26.011188 update_engine[1462]:
Apr 14 00:23:26.011188 update_engine[1462]:
Apr 14 00:23:26.011188 update_engine[1462]:
Apr 14 00:23:26.011188 update_engine[1462]:
Apr 14 00:23:26.011188 update_engine[1462]:
Apr 14 00:23:26.011188 update_engine[1462]: I20260414 00:23:26.003296 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 00:23:26.018907 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 14 00:23:26.022078 update_engine[1462]: I20260414 00:23:26.020077 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 00:23:26.022078 update_engine[1462]: I20260414 00:23:26.021080 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 00:23:26.040578 update_engine[1462]: E20260414 00:23:26.039997 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 00:23:26.040578 update_engine[1462]: I20260414 00:23:26.040302 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 14 00:23:30.845131 kubelet[2624]: E0414 00:23:30.843970 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:23:31.006270 systemd[1]: Started sshd@27-10.0.0.74:22-10.0.0.1:42844.service - OpenSSH per-connection server daemon (10.0.0.1:42844).
Apr 14 00:23:31.429045 sshd[4949]: Accepted publickey for core from 10.0.0.1 port 42844 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:23:31.461044 sshd[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:23:31.603247 systemd-logind[1460]: New session 28 of user core.
Apr 14 00:23:31.623999 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 14 00:23:34.656320 sshd[4949]: pam_unix(sshd:session): session closed for user core
Apr 14 00:23:34.719901 systemd[1]: sshd@27-10.0.0.74:22-10.0.0.1:42844.service: Deactivated successfully.
Apr 14 00:23:34.743303 systemd[1]: session-28.scope: Deactivated successfully.
Apr 14 00:23:34.748967 systemd[1]: session-28.scope: Consumed 1.710s CPU time.
Apr 14 00:23:34.814992 systemd-logind[1460]: Session 28 logged out. Waiting for processes to exit.
Apr 14 00:23:34.818166 kubelet[2624]: E0414 00:23:34.815748 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:23:34.820297 systemd-logind[1460]: Removed session 28.
Apr 14 00:23:35.973787 update_engine[1462]: I20260414 00:23:35.968888 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 00:23:35.980322 update_engine[1462]: I20260414 00:23:35.975083 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 00:23:35.985779 update_engine[1462]: I20260414 00:23:35.983033 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 00:23:36.017933 update_engine[1462]: E20260414 00:23:36.017648 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 00:23:36.019511 update_engine[1462]: I20260414 00:23:36.019069 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 14 00:23:39.734058 systemd[1]: Started sshd@28-10.0.0.74:22-10.0.0.1:56256.service - OpenSSH per-connection server daemon (10.0.0.1:56256).
Apr 14 00:23:40.443174 sshd[5005]: Accepted publickey for core from 10.0.0.1 port 56256 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:23:40.466873 sshd[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:23:40.626254 systemd-logind[1460]: New session 29 of user core.
Apr 14 00:23:40.660459 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 14 00:23:42.621127 sshd[5005]: pam_unix(sshd:session): session closed for user core
Apr 14 00:23:42.672981 systemd[1]: sshd@28-10.0.0.74:22-10.0.0.1:56256.service: Deactivated successfully.
Apr 14 00:23:42.715249 systemd[1]: session-29.scope: Deactivated successfully.
Apr 14 00:23:42.716837 systemd[1]: session-29.scope: Consumed 1.063s CPU time.
Apr 14 00:23:42.724949 systemd-logind[1460]: Session 29 logged out. Waiting for processes to exit.
Apr 14 00:23:42.739465 systemd-logind[1460]: Removed session 29.
Apr 14 00:23:45.971718 update_engine[1462]: I20260414 00:23:45.970712 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 00:23:45.977055 update_engine[1462]: I20260414 00:23:45.974933 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 00:23:45.977381 update_engine[1462]: I20260414 00:23:45.977176 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 00:23:45.985581 update_engine[1462]: E20260414 00:23:45.985159 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 00:23:45.986033 update_engine[1462]: I20260414 00:23:45.985872 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 14 00:23:47.813445 systemd[1]: Started sshd@29-10.0.0.74:22-10.0.0.1:44014.service - OpenSSH per-connection server daemon (10.0.0.1:44014).
Apr 14 00:23:48.140673 sshd[5060]: Accepted publickey for core from 10.0.0.1 port 44014 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:23:48.243145 sshd[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:23:48.323751 systemd-logind[1460]: New session 30 of user core.
Apr 14 00:23:48.440045 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 14 00:23:50.664296 sshd[5060]: pam_unix(sshd:session): session closed for user core
Apr 14 00:23:50.825557 systemd[1]: sshd@29-10.0.0.74:22-10.0.0.1:44014.service: Deactivated successfully.
Apr 14 00:23:50.896808 systemd[1]: session-30.scope: Deactivated successfully.
Apr 14 00:23:50.897823 systemd[1]: session-30.scope: Consumed 1.173s CPU time.
Apr 14 00:23:50.919215 systemd-logind[1460]: Session 30 logged out. Waiting for processes to exit.
Apr 14 00:23:50.926812 systemd-logind[1460]: Removed session 30.
Apr 14 00:23:55.749371 systemd[1]: Started sshd@30-10.0.0.74:22-10.0.0.1:56366.service - OpenSSH per-connection server daemon (10.0.0.1:56366).
Apr 14 00:23:56.002576 update_engine[1462]: I20260414 00:23:55.995834 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 00:23:56.002576 update_engine[1462]: I20260414 00:23:56.001732 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 00:23:56.008342 update_engine[1462]: I20260414 00:23:56.008248 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 00:23:56.016657 update_engine[1462]: E20260414 00:23:56.012113 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 00:23:56.016657 update_engine[1462]: I20260414 00:23:56.012253 1462 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 14 00:23:56.016657 update_engine[1462]: I20260414 00:23:56.012264 1462 omaha_request_action.cc:617] Omaha request response:
Apr 14 00:23:56.016657 update_engine[1462]: E20260414 00:23:56.012599 1462 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 14 00:23:56.016657 update_engine[1462]: I20260414 00:23:56.012658 1462 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 14 00:23:56.016657 update_engine[1462]: I20260414 00:23:56.012665 1462 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 14 00:23:56.016657 update_engine[1462]: I20260414 00:23:56.012670 1462 update_attempter.cc:306] Processing Done.
Apr 14 00:23:56.016657 update_engine[1462]: E20260414 00:23:56.012709 1462 update_attempter.cc:619] Update failed.
Apr 14 00:23:56.016657 update_engine[1462]: I20260414 00:23:56.012715 1462 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 14 00:23:56.016657 update_engine[1462]: I20260414 00:23:56.012722 1462 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 14 00:23:56.016657 update_engine[1462]: I20260414 00:23:56.012727 1462 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 14 00:23:56.016657 update_engine[1462]: I20260414 00:23:56.012825 1462 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 14 00:23:56.016657 update_engine[1462]: I20260414 00:23:56.012899 1462 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 14 00:23:56.016657 update_engine[1462]: I20260414 00:23:56.012910 1462 omaha_request_action.cc:272] Request:
Apr 14 00:23:56.016657 update_engine[1462]:
Apr 14 00:23:56.016657 update_engine[1462]:
Apr 14 00:23:56.016657 update_engine[1462]:
Apr 14 00:23:56.022049 update_engine[1462]:
Apr 14 00:23:56.022049 update_engine[1462]:
Apr 14 00:23:56.022049 update_engine[1462]:
Apr 14 00:23:56.022049 update_engine[1462]: I20260414 00:23:56.012916 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 00:23:56.022049 update_engine[1462]: I20260414 00:23:56.013242 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 00:23:56.022049 update_engine[1462]: I20260414 00:23:56.013804 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 00:23:56.023157 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 14 00:23:56.034007 update_engine[1462]: E20260414 00:23:56.033944 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 00:23:56.039017 update_engine[1462]: I20260414 00:23:56.036259 1462 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 14 00:23:56.039017 update_engine[1462]: I20260414 00:23:56.036329 1462 omaha_request_action.cc:617] Omaha request response:
Apr 14 00:23:56.039017 update_engine[1462]: I20260414 00:23:56.036341 1462 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 14 00:23:56.039017 update_engine[1462]: I20260414 00:23:56.036346 1462 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 14 00:23:56.039017 update_engine[1462]: I20260414 00:23:56.036352 1462 update_attempter.cc:306] Processing Done.
Apr 14 00:23:56.039017 update_engine[1462]: I20260414 00:23:56.036362 1462 update_attempter.cc:310] Error event sent.
Apr 14 00:23:56.039017 update_engine[1462]: I20260414 00:23:56.036387 1462 update_check_scheduler.cc:74] Next update check in 41m14s
Apr 14 00:23:56.044014 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 14 00:23:56.194312 sshd[5103]: Accepted publickey for core from 10.0.0.1 port 56366 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:23:56.243208 sshd[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:23:56.396082 systemd-logind[1460]: New session 31 of user core.
Apr 14 00:23:56.442375 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 14 00:23:58.301933 sshd[5103]: pam_unix(sshd:session): session closed for user core Apr 14 00:23:58.313161 systemd[1]: sshd@30-10.0.0.74:22-10.0.0.1:56366.service: Deactivated successfully. Apr 14 00:23:58.329365 systemd[1]: session-31.scope: Deactivated successfully. Apr 14 00:23:58.347808 systemd-logind[1460]: Session 31 logged out. Waiting for processes to exit. Apr 14 00:23:58.412344 systemd-logind[1460]: Removed session 31. Apr 14 00:24:03.437444 systemd[1]: Started sshd@31-10.0.0.74:22-10.0.0.1:56376.service - OpenSSH per-connection server daemon (10.0.0.1:56376). Apr 14 00:24:03.782169 sshd[5139]: Accepted publickey for core from 10.0.0.1 port 56376 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:24:03.802604 sshd[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:24:03.863495 systemd-logind[1460]: New session 32 of user core. Apr 14 00:24:03.907621 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 14 00:24:04.833814 kubelet[2624]: E0414 00:24:04.833390 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:24:05.240922 sshd[5139]: pam_unix(sshd:session): session closed for user core Apr 14 00:24:05.319594 systemd[1]: sshd@31-10.0.0.74:22-10.0.0.1:56376.service: Deactivated successfully. Apr 14 00:24:05.348046 systemd[1]: session-32.scope: Deactivated successfully. Apr 14 00:24:05.384786 systemd-logind[1460]: Session 32 logged out. Waiting for processes to exit. Apr 14 00:24:05.395166 systemd-logind[1460]: Removed session 32. 
Apr 14 00:24:05.790561 kubelet[2624]: E0414 00:24:05.789061 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:24:10.434458 systemd[1]: Started sshd@32-10.0.0.74:22-10.0.0.1:60594.service - OpenSSH per-connection server daemon (10.0.0.1:60594). Apr 14 00:24:11.122847 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 60594 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:24:11.147532 sshd[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:24:11.325330 systemd-logind[1460]: New session 33 of user core. Apr 14 00:24:11.401732 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 14 00:24:11.778536 kubelet[2624]: E0414 00:24:11.778078 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:24:13.814801 kubelet[2624]: E0414 00:24:13.813300 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:24:14.095072 sshd[5196]: pam_unix(sshd:session): session closed for user core Apr 14 00:24:14.137583 systemd-logind[1460]: Session 33 logged out. Waiting for processes to exit. Apr 14 00:24:14.145296 systemd[1]: sshd@32-10.0.0.74:22-10.0.0.1:60594.service: Deactivated successfully. Apr 14 00:24:14.206380 systemd[1]: session-33.scope: Deactivated successfully. Apr 14 00:24:14.207348 systemd[1]: session-33.scope: Consumed 1.670s CPU time. Apr 14 00:24:14.228636 systemd-logind[1460]: Removed session 33. Apr 14 00:24:19.217651 systemd[1]: Started sshd@33-10.0.0.74:22-10.0.0.1:33644.service - OpenSSH per-connection server daemon (10.0.0.1:33644). 
Apr 14 00:24:19.645497 sshd[5232]: Accepted publickey for core from 10.0.0.1 port 33644 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:24:19.669226 sshd[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:24:19.736698 systemd-logind[1460]: New session 34 of user core. Apr 14 00:24:19.893002 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 14 00:24:22.936955 sshd[5232]: pam_unix(sshd:session): session closed for user core Apr 14 00:24:23.029771 systemd[1]: sshd@33-10.0.0.74:22-10.0.0.1:33644.service: Deactivated successfully. Apr 14 00:24:23.076457 systemd[1]: session-34.scope: Deactivated successfully. Apr 14 00:24:23.079927 systemd[1]: session-34.scope: Consumed 2.220s CPU time. Apr 14 00:24:23.099266 systemd-logind[1460]: Session 34 logged out. Waiting for processes to exit. Apr 14 00:24:23.116085 systemd-logind[1460]: Removed session 34. Apr 14 00:24:28.145830 systemd[1]: Started sshd@34-10.0.0.74:22-10.0.0.1:59504.service - OpenSSH per-connection server daemon (10.0.0.1:59504). Apr 14 00:24:28.471163 sshd[5287]: Accepted publickey for core from 10.0.0.1 port 59504 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:24:28.493217 sshd[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:24:28.648047 systemd-logind[1460]: New session 35 of user core. Apr 14 00:24:28.701054 systemd[1]: Started session-35.scope - Session 35 of User core. Apr 14 00:24:30.772200 sshd[5287]: pam_unix(sshd:session): session closed for user core Apr 14 00:24:30.816132 systemd-logind[1460]: Session 35 logged out. Waiting for processes to exit. Apr 14 00:24:30.821760 systemd[1]: sshd@34-10.0.0.74:22-10.0.0.1:59504.service: Deactivated successfully. Apr 14 00:24:30.933167 systemd[1]: session-35.scope: Deactivated successfully. Apr 14 00:24:30.934785 systemd[1]: session-35.scope: Consumed 1.205s CPU time. 
Apr 14 00:24:30.964976 systemd-logind[1460]: Removed session 35. Apr 14 00:24:34.831943 kubelet[2624]: E0414 00:24:34.823111 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:24:35.903116 systemd[1]: Started sshd@35-10.0.0.74:22-10.0.0.1:41996.service - OpenSSH per-connection server daemon (10.0.0.1:41996). Apr 14 00:24:36.064086 sshd[5335]: Accepted publickey for core from 10.0.0.1 port 41996 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:24:36.071618 sshd[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:24:36.136917 systemd-logind[1460]: New session 36 of user core. Apr 14 00:24:36.269164 systemd[1]: Started session-36.scope - Session 36 of User core. Apr 14 00:24:36.816559 kubelet[2624]: E0414 00:24:36.816085 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:24:39.320857 sshd[5335]: pam_unix(sshd:session): session closed for user core Apr 14 00:24:39.348341 systemd[1]: sshd@35-10.0.0.74:22-10.0.0.1:41996.service: Deactivated successfully. Apr 14 00:24:39.403278 systemd[1]: session-36.scope: Deactivated successfully. Apr 14 00:24:39.404852 systemd[1]: session-36.scope: Consumed 1.886s CPU time. Apr 14 00:24:39.431357 systemd-logind[1460]: Session 36 logged out. Waiting for processes to exit. Apr 14 00:24:39.439363 systemd-logind[1460]: Removed session 36. Apr 14 00:24:44.380827 systemd[1]: Started sshd@36-10.0.0.74:22-10.0.0.1:42008.service - OpenSSH per-connection server daemon (10.0.0.1:42008). 
Apr 14 00:24:44.822069 sshd[5381]: Accepted publickey for core from 10.0.0.1 port 42008 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:24:44.846768 sshd[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:24:44.937347 systemd-logind[1460]: New session 37 of user core. Apr 14 00:24:44.981771 systemd[1]: Started session-37.scope - Session 37 of User core. Apr 14 00:24:46.391279 sshd[5381]: pam_unix(sshd:session): session closed for user core Apr 14 00:24:46.426330 systemd[1]: sshd@36-10.0.0.74:22-10.0.0.1:42008.service: Deactivated successfully. Apr 14 00:24:46.427261 systemd-logind[1460]: Session 37 logged out. Waiting for processes to exit. Apr 14 00:24:46.492805 systemd[1]: session-37.scope: Deactivated successfully. Apr 14 00:24:46.506170 systemd-logind[1460]: Removed session 37. Apr 14 00:24:48.794454 kubelet[2624]: E0414 00:24:48.792874 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:24:51.513961 systemd[1]: Started sshd@37-10.0.0.74:22-10.0.0.1:42522.service - OpenSSH per-connection server daemon (10.0.0.1:42522). Apr 14 00:24:51.792781 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 42522 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:24:51.820897 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:24:51.904155 systemd-logind[1460]: New session 38 of user core. Apr 14 00:24:51.933752 systemd[1]: Started session-38.scope - Session 38 of User core. Apr 14 00:24:53.564394 sshd[5425]: pam_unix(sshd:session): session closed for user core Apr 14 00:24:53.600872 systemd[1]: sshd@37-10.0.0.74:22-10.0.0.1:42522.service: Deactivated successfully. Apr 14 00:24:53.625828 systemd[1]: session-38.scope: Deactivated successfully. 
Apr 14 00:24:53.636447 systemd-logind[1460]: Session 38 logged out. Waiting for processes to exit. Apr 14 00:24:53.644983 systemd-logind[1460]: Removed session 38. Apr 14 00:24:58.743194 systemd[1]: Started sshd@38-10.0.0.74:22-10.0.0.1:59870.service - OpenSSH per-connection server daemon (10.0.0.1:59870). Apr 14 00:24:59.027859 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 59870 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:24:59.107251 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:24:59.175059 systemd-logind[1460]: New session 39 of user core. Apr 14 00:24:59.209003 systemd[1]: Started session-39.scope - Session 39 of User core. Apr 14 00:25:00.622113 sshd[5478]: pam_unix(sshd:session): session closed for user core Apr 14 00:25:00.729653 systemd[1]: sshd@38-10.0.0.74:22-10.0.0.1:59870.service: Deactivated successfully. Apr 14 00:25:00.748592 systemd[1]: session-39.scope: Deactivated successfully. Apr 14 00:25:00.751246 systemd-logind[1460]: Session 39 logged out. Waiting for processes to exit. Apr 14 00:25:00.791647 systemd[1]: Started sshd@39-10.0.0.74:22-10.0.0.1:59884.service - OpenSSH per-connection server daemon (10.0.0.1:59884). Apr 14 00:25:00.813098 systemd-logind[1460]: Removed session 39. Apr 14 00:25:01.101366 sshd[5498]: Accepted publickey for core from 10.0.0.1 port 59884 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:25:01.116746 sshd[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:25:01.209365 systemd-logind[1460]: New session 40 of user core. Apr 14 00:25:01.293514 systemd[1]: Started session-40.scope - Session 40 of User core. Apr 14 00:25:04.540103 sshd[5498]: pam_unix(sshd:session): session closed for user core Apr 14 00:25:04.591240 systemd[1]: sshd@39-10.0.0.74:22-10.0.0.1:59884.service: Deactivated successfully. 
Apr 14 00:25:04.604314 systemd[1]: session-40.scope: Deactivated successfully. Apr 14 00:25:04.605165 systemd[1]: session-40.scope: Consumed 1.696s CPU time. Apr 14 00:25:04.627821 systemd-logind[1460]: Session 40 logged out. Waiting for processes to exit. Apr 14 00:25:04.731805 systemd[1]: Started sshd@40-10.0.0.74:22-10.0.0.1:59898.service - OpenSSH per-connection server daemon (10.0.0.1:59898). Apr 14 00:25:04.744774 systemd-logind[1460]: Removed session 40. Apr 14 00:25:05.168081 sshd[5529]: Accepted publickey for core from 10.0.0.1 port 59898 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:25:05.194320 sshd[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:25:05.218517 systemd-logind[1460]: New session 41 of user core. Apr 14 00:25:05.240188 systemd[1]: Started session-41.scope - Session 41 of User core. Apr 14 00:25:07.025128 sshd[5529]: pam_unix(sshd:session): session closed for user core Apr 14 00:25:07.053667 systemd-logind[1460]: Session 41 logged out. Waiting for processes to exit. Apr 14 00:25:07.065115 systemd[1]: sshd@40-10.0.0.74:22-10.0.0.1:59898.service: Deactivated successfully. Apr 14 00:25:07.112287 systemd[1]: session-41.scope: Deactivated successfully. Apr 14 00:25:07.114090 systemd[1]: session-41.scope: Consumed 1.014s CPU time. Apr 14 00:25:07.142859 systemd-logind[1460]: Removed session 41. Apr 14 00:25:09.807797 kubelet[2624]: E0414 00:25:09.805227 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:25:12.250006 systemd[1]: Started sshd@41-10.0.0.74:22-10.0.0.1:39060.service - OpenSSH per-connection server daemon (10.0.0.1:39060). 
Apr 14 00:25:12.608338 sshd[5572]: Accepted publickey for core from 10.0.0.1 port 39060 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:25:12.627869 sshd[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:25:12.683680 systemd-logind[1460]: New session 42 of user core. Apr 14 00:25:12.723367 systemd[1]: Started session-42.scope - Session 42 of User core. Apr 14 00:25:15.312464 sshd[5572]: pam_unix(sshd:session): session closed for user core Apr 14 00:25:15.384885 systemd[1]: sshd@41-10.0.0.74:22-10.0.0.1:39060.service: Deactivated successfully. Apr 14 00:25:15.428249 systemd[1]: session-42.scope: Deactivated successfully. Apr 14 00:25:15.458359 systemd[1]: session-42.scope: Consumed 1.135s CPU time. Apr 14 00:25:15.476133 systemd-logind[1460]: Session 42 logged out. Waiting for processes to exit. Apr 14 00:25:15.495978 systemd-logind[1460]: Removed session 42. Apr 14 00:25:18.800474 kubelet[2624]: E0414 00:25:18.797771 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:25:20.417654 systemd[1]: Started sshd@42-10.0.0.74:22-10.0.0.1:34898.service - OpenSSH per-connection server daemon (10.0.0.1:34898). Apr 14 00:25:20.717195 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 34898 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:25:20.731525 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:25:20.759536 systemd-logind[1460]: New session 43 of user core. Apr 14 00:25:20.783078 systemd[1]: Started session-43.scope - Session 43 of User core. Apr 14 00:25:22.240951 sshd[5621]: pam_unix(sshd:session): session closed for user core Apr 14 00:25:22.279134 systemd[1]: sshd@42-10.0.0.74:22-10.0.0.1:34898.service: Deactivated successfully. 
Apr 14 00:25:22.299474 systemd[1]: session-43.scope: Deactivated successfully. Apr 14 00:25:22.312117 systemd-logind[1460]: Session 43 logged out. Waiting for processes to exit. Apr 14 00:25:22.325534 systemd-logind[1460]: Removed session 43. Apr 14 00:25:26.795348 kubelet[2624]: E0414 00:25:26.795216 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:25:27.395081 systemd[1]: Started sshd@43-10.0.0.74:22-10.0.0.1:55250.service - OpenSSH per-connection server daemon (10.0.0.1:55250). Apr 14 00:25:27.735753 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 55250 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:25:27.783541 sshd[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:25:27.790518 kubelet[2624]: E0414 00:25:27.789007 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:25:27.896174 systemd-logind[1460]: New session 44 of user core. Apr 14 00:25:28.008220 systemd[1]: Started session-44.scope - Session 44 of User core. Apr 14 00:25:29.429806 sshd[5661]: pam_unix(sshd:session): session closed for user core Apr 14 00:25:29.451203 systemd[1]: sshd@43-10.0.0.74:22-10.0.0.1:55250.service: Deactivated successfully. Apr 14 00:25:29.532114 systemd[1]: session-44.scope: Deactivated successfully. Apr 14 00:25:29.569308 systemd-logind[1460]: Session 44 logged out. Waiting for processes to exit. Apr 14 00:25:29.577936 systemd-logind[1460]: Removed session 44. Apr 14 00:25:34.601434 systemd[1]: Started sshd@44-10.0.0.74:22-10.0.0.1:55262.service - OpenSSH per-connection server daemon (10.0.0.1:55262). 
Apr 14 00:25:34.908868 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 55262 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:25:34.936160 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:25:35.011901 systemd-logind[1460]: New session 45 of user core. Apr 14 00:25:35.084485 systemd[1]: Started session-45.scope - Session 45 of User core. Apr 14 00:25:37.112384 sshd[5709]: pam_unix(sshd:session): session closed for user core Apr 14 00:25:37.174578 systemd[1]: sshd@44-10.0.0.74:22-10.0.0.1:55262.service: Deactivated successfully. Apr 14 00:25:37.244741 systemd[1]: session-45.scope: Deactivated successfully. Apr 14 00:25:37.248505 systemd[1]: session-45.scope: Consumed 1.395s CPU time. Apr 14 00:25:37.302183 systemd-logind[1460]: Session 45 logged out. Waiting for processes to exit. Apr 14 00:25:37.310348 systemd-logind[1460]: Removed session 45. Apr 14 00:25:38.789537 kubelet[2624]: E0414 00:25:38.786100 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:25:42.188727 systemd[1]: Started sshd@45-10.0.0.74:22-10.0.0.1:58574.service - OpenSSH per-connection server daemon (10.0.0.1:58574). Apr 14 00:25:42.816934 sshd[5752]: Accepted publickey for core from 10.0.0.1 port 58574 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:25:42.883051 sshd[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:25:42.923884 systemd-logind[1460]: New session 46 of user core. Apr 14 00:25:42.939164 systemd[1]: Started session-46.scope - Session 46 of User core. Apr 14 00:25:45.732101 sshd[5752]: pam_unix(sshd:session): session closed for user core Apr 14 00:25:45.830254 systemd-logind[1460]: Session 46 logged out. Waiting for processes to exit. 
Apr 14 00:25:45.836240 systemd[1]: sshd@45-10.0.0.74:22-10.0.0.1:58574.service: Deactivated successfully. Apr 14 00:25:45.867139 systemd[1]: session-46.scope: Deactivated successfully. Apr 14 00:25:45.868298 systemd[1]: session-46.scope: Consumed 1.891s CPU time. Apr 14 00:25:45.893827 systemd-logind[1460]: Removed session 46. Apr 14 00:25:50.811855 systemd[1]: Started sshd@46-10.0.0.74:22-10.0.0.1:33022.service - OpenSSH per-connection server daemon (10.0.0.1:33022). Apr 14 00:25:51.306174 sshd[5802]: Accepted publickey for core from 10.0.0.1 port 33022 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:25:51.327348 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:25:51.393226 systemd-logind[1460]: New session 47 of user core. Apr 14 00:25:51.411336 systemd[1]: Started session-47.scope - Session 47 of User core. Apr 14 00:25:53.096990 sshd[5802]: pam_unix(sshd:session): session closed for user core Apr 14 00:25:53.137595 systemd[1]: sshd@46-10.0.0.74:22-10.0.0.1:33022.service: Deactivated successfully. Apr 14 00:25:53.222666 systemd[1]: session-47.scope: Deactivated successfully. Apr 14 00:25:53.223531 systemd[1]: session-47.scope: Consumed 1.217s CPU time. Apr 14 00:25:53.243224 systemd-logind[1460]: Session 47 logged out. Waiting for processes to exit. Apr 14 00:25:53.251114 systemd-logind[1460]: Removed session 47. Apr 14 00:25:53.817801 kubelet[2624]: E0414 00:25:53.817519 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:25:58.360845 systemd[1]: Started sshd@47-10.0.0.74:22-10.0.0.1:44016.service - OpenSSH per-connection server daemon (10.0.0.1:44016). 
Apr 14 00:25:58.812287 sshd[5842]: Accepted publickey for core from 10.0.0.1 port 44016 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:25:58.848331 sshd[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:25:59.029040 systemd-logind[1460]: New session 48 of user core. Apr 14 00:25:59.114106 systemd[1]: Started session-48.scope - Session 48 of User core. Apr 14 00:26:01.640591 sshd[5842]: pam_unix(sshd:session): session closed for user core Apr 14 00:26:01.672876 systemd[1]: sshd@47-10.0.0.74:22-10.0.0.1:44016.service: Deactivated successfully. Apr 14 00:26:01.686946 systemd[1]: session-48.scope: Deactivated successfully. Apr 14 00:26:01.687850 systemd[1]: session-48.scope: Consumed 1.559s CPU time. Apr 14 00:26:01.700977 systemd-logind[1460]: Session 48 logged out. Waiting for processes to exit. Apr 14 00:26:01.707949 systemd-logind[1460]: Removed session 48. Apr 14 00:26:06.825370 systemd[1]: Started sshd@48-10.0.0.74:22-10.0.0.1:44470.service - OpenSSH per-connection server daemon (10.0.0.1:44470). Apr 14 00:26:07.080383 sshd[5890]: Accepted publickey for core from 10.0.0.1 port 44470 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:26:07.091682 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:26:07.199140 systemd-logind[1460]: New session 49 of user core. Apr 14 00:26:07.284439 systemd[1]: Started session-49.scope - Session 49 of User core. Apr 14 00:26:08.607987 sshd[5890]: pam_unix(sshd:session): session closed for user core Apr 14 00:26:08.681297 systemd[1]: sshd@48-10.0.0.74:22-10.0.0.1:44470.service: Deactivated successfully. Apr 14 00:26:08.705962 systemd[1]: session-49.scope: Deactivated successfully. Apr 14 00:26:08.721170 systemd-logind[1460]: Session 49 logged out. Waiting for processes to exit. Apr 14 00:26:08.733004 systemd-logind[1460]: Removed session 49. 
Apr 14 00:26:13.765940 systemd[1]: Started sshd@49-10.0.0.74:22-10.0.0.1:44480.service - OpenSSH per-connection server daemon (10.0.0.1:44480). Apr 14 00:26:14.120828 sshd[5932]: Accepted publickey for core from 10.0.0.1 port 44480 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:26:14.142375 sshd[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:26:14.332075 systemd-logind[1460]: New session 50 of user core. Apr 14 00:26:14.403967 systemd[1]: Started session-50.scope - Session 50 of User core. Apr 14 00:26:15.794019 kubelet[2624]: E0414 00:26:15.788205 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:26:15.972606 sshd[5932]: pam_unix(sshd:session): session closed for user core Apr 14 00:26:16.014765 systemd[1]: sshd@49-10.0.0.74:22-10.0.0.1:44480.service: Deactivated successfully. Apr 14 00:26:16.113266 systemd[1]: session-50.scope: Deactivated successfully. Apr 14 00:26:16.133617 systemd-logind[1460]: Session 50 logged out. Waiting for processes to exit. Apr 14 00:26:16.145004 systemd-logind[1460]: Removed session 50. Apr 14 00:26:21.224157 systemd[1]: Started sshd@50-10.0.0.74:22-10.0.0.1:58598.service - OpenSSH per-connection server daemon (10.0.0.1:58598). Apr 14 00:26:21.518756 sshd[5967]: Accepted publickey for core from 10.0.0.1 port 58598 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:26:21.550214 sshd[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:26:21.697915 systemd-logind[1460]: New session 51 of user core. Apr 14 00:26:21.717329 systemd[1]: Started session-51.scope - Session 51 of User core. Apr 14 00:26:23.486707 sshd[5967]: pam_unix(sshd:session): session closed for user core Apr 14 00:26:23.506646 systemd[1]: sshd@50-10.0.0.74:22-10.0.0.1:58598.service: Deactivated successfully. 
Apr 14 00:26:23.534659 systemd[1]: session-51.scope: Deactivated successfully. Apr 14 00:26:23.535317 systemd[1]: session-51.scope: Consumed 1.226s CPU time. Apr 14 00:26:23.619791 systemd-logind[1460]: Session 51 logged out. Waiting for processes to exit. Apr 14 00:26:23.627281 systemd-logind[1460]: Removed session 51. Apr 14 00:26:28.586918 systemd[1]: Started sshd@51-10.0.0.74:22-10.0.0.1:40530.service - OpenSSH per-connection server daemon (10.0.0.1:40530). Apr 14 00:26:28.786700 kubelet[2624]: E0414 00:26:28.785938 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:26:28.798459 sshd[6021]: Accepted publickey for core from 10.0.0.1 port 40530 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:26:28.824384 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:26:28.898657 systemd-logind[1460]: New session 52 of user core. Apr 14 00:26:28.934018 systemd[1]: Started session-52.scope - Session 52 of User core. Apr 14 00:26:29.730162 sshd[6021]: pam_unix(sshd:session): session closed for user core Apr 14 00:26:29.744138 systemd[1]: sshd@51-10.0.0.74:22-10.0.0.1:40530.service: Deactivated successfully. Apr 14 00:26:29.804705 systemd[1]: session-52.scope: Deactivated successfully. Apr 14 00:26:29.824607 systemd-logind[1460]: Session 52 logged out. Waiting for processes to exit. Apr 14 00:26:29.835351 systemd-logind[1460]: Removed session 52. 
Apr 14 00:26:30.834849 kubelet[2624]: E0414 00:26:30.834225 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:26:34.781771 kubelet[2624]: E0414 00:26:34.781619 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:26:34.811138 systemd[1]: Started sshd@52-10.0.0.74:22-10.0.0.1:40542.service - OpenSSH per-connection server daemon (10.0.0.1:40542). Apr 14 00:26:35.095301 sshd[6057]: Accepted publickey for core from 10.0.0.1 port 40542 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:26:35.103257 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:26:35.195914 systemd-logind[1460]: New session 53 of user core. Apr 14 00:26:35.249164 systemd[1]: Started session-53.scope - Session 53 of User core. Apr 14 00:26:36.741759 sshd[6057]: pam_unix(sshd:session): session closed for user core Apr 14 00:26:36.803482 systemd-logind[1460]: Session 53 logged out. Waiting for processes to exit. Apr 14 00:26:36.809778 systemd[1]: sshd@52-10.0.0.74:22-10.0.0.1:40542.service: Deactivated successfully. Apr 14 00:26:36.849280 systemd[1]: session-53.scope: Deactivated successfully. Apr 14 00:26:36.923212 systemd-logind[1460]: Removed session 53. Apr 14 00:26:42.031841 systemd[1]: Started sshd@53-10.0.0.74:22-10.0.0.1:37134.service - OpenSSH per-connection server daemon (10.0.0.1:37134). Apr 14 00:26:42.307219 sshd[6094]: Accepted publickey for core from 10.0.0.1 port 37134 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:26:42.313166 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:26:42.365011 systemd-logind[1460]: New session 54 of user core. 
Apr 14 00:26:42.418345 systemd[1]: Started session-54.scope - Session 54 of User core. Apr 14 00:26:44.337205 sshd[6094]: pam_unix(sshd:session): session closed for user core Apr 14 00:26:44.382031 systemd[1]: sshd@53-10.0.0.74:22-10.0.0.1:37134.service: Deactivated successfully. Apr 14 00:26:44.405817 systemd[1]: session-54.scope: Deactivated successfully. Apr 14 00:26:44.414853 systemd[1]: session-54.scope: Consumed 1.094s CPU time. Apr 14 00:26:44.435569 systemd-logind[1460]: Session 54 logged out. Waiting for processes to exit. Apr 14 00:26:44.446383 systemd-logind[1460]: Removed session 54. Apr 14 00:26:49.462592 systemd[1]: Started sshd@54-10.0.0.74:22-10.0.0.1:33454.service - OpenSSH per-connection server daemon (10.0.0.1:33454). Apr 14 00:26:49.773451 sshd[6151]: Accepted publickey for core from 10.0.0.1 port 33454 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:26:49.798457 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:26:49.900185 systemd-logind[1460]: New session 55 of user core. Apr 14 00:26:50.058340 systemd[1]: Started session-55.scope - Session 55 of User core. Apr 14 00:26:52.071656 sshd[6151]: pam_unix(sshd:session): session closed for user core Apr 14 00:26:52.132854 systemd[1]: sshd@54-10.0.0.74:22-10.0.0.1:33454.service: Deactivated successfully. Apr 14 00:26:52.163821 systemd[1]: session-55.scope: Deactivated successfully. Apr 14 00:26:52.173811 systemd[1]: session-55.scope: Consumed 1.125s CPU time. Apr 14 00:26:52.218757 systemd-logind[1460]: Session 55 logged out. Waiting for processes to exit. Apr 14 00:26:52.229195 systemd-logind[1460]: Removed session 55. 
Apr 14 00:26:52.804349 kubelet[2624]: E0414 00:26:52.803858 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:26:53.806146 kubelet[2624]: E0414 00:26:53.805171 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:26:57.232837 systemd[1]: Started sshd@55-10.0.0.74:22-10.0.0.1:34306.service - OpenSSH per-connection server daemon (10.0.0.1:34306). Apr 14 00:26:57.664856 sshd[6187]: Accepted publickey for core from 10.0.0.1 port 34306 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:26:57.670154 sshd[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:26:57.722842 systemd-logind[1460]: New session 56 of user core. Apr 14 00:26:57.805321 systemd[1]: Started session-56.scope - Session 56 of User core. Apr 14 00:26:59.064665 sshd[6187]: pam_unix(sshd:session): session closed for user core Apr 14 00:26:59.098105 systemd[1]: sshd@55-10.0.0.74:22-10.0.0.1:34306.service: Deactivated successfully. Apr 14 00:26:59.119292 systemd[1]: session-56.scope: Deactivated successfully. Apr 14 00:26:59.144083 systemd-logind[1460]: Session 56 logged out. Waiting for processes to exit. Apr 14 00:26:59.150817 systemd-logind[1460]: Removed session 56. Apr 14 00:27:04.233778 systemd[1]: Started sshd@56-10.0.0.74:22-10.0.0.1:34322.service - OpenSSH per-connection server daemon (10.0.0.1:34322). Apr 14 00:27:04.467780 sshd[6241]: Accepted publickey for core from 10.0.0.1 port 34322 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:27:04.479976 sshd[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:27:04.507623 systemd-logind[1460]: New session 57 of user core. 
Apr 14 00:27:04.618796 systemd[1]: Started session-57.scope - Session 57 of User core. Apr 14 00:27:06.232782 sshd[6241]: pam_unix(sshd:session): session closed for user core Apr 14 00:27:06.288592 systemd[1]: sshd@56-10.0.0.74:22-10.0.0.1:34322.service: Deactivated successfully. Apr 14 00:27:06.315451 systemd[1]: session-57.scope: Deactivated successfully. Apr 14 00:27:06.316656 systemd[1]: session-57.scope: Consumed 1.189s CPU time. Apr 14 00:27:06.322722 systemd-logind[1460]: Session 57 logged out. Waiting for processes to exit. Apr 14 00:27:06.327453 systemd-logind[1460]: Removed session 57. Apr 14 00:27:08.798103 kubelet[2624]: E0414 00:27:08.797849 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:27:11.422365 systemd[1]: Started sshd@57-10.0.0.74:22-10.0.0.1:55256.service - OpenSSH per-connection server daemon (10.0.0.1:55256). Apr 14 00:27:11.717035 sshd[6278]: Accepted publickey for core from 10.0.0.1 port 55256 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 14 00:27:11.731747 sshd[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:27:11.815210 systemd-logind[1460]: New session 58 of user core. Apr 14 00:27:11.875469 systemd[1]: Started session-58.scope - Session 58 of User core. Apr 14 00:27:13.721952 sshd[6278]: pam_unix(sshd:session): session closed for user core Apr 14 00:27:13.745943 systemd[1]: sshd@57-10.0.0.74:22-10.0.0.1:55256.service: Deactivated successfully. Apr 14 00:27:13.798066 systemd[1]: session-58.scope: Deactivated successfully. Apr 14 00:27:13.799351 systemd[1]: session-58.scope: Consumed 1.185s CPU time. Apr 14 00:27:13.820429 systemd-logind[1460]: Session 58 logged out. Waiting for processes to exit. Apr 14 00:27:13.837315 systemd-logind[1460]: Removed session 58. 
Apr 14 00:27:18.966619 systemd[1]: Started sshd@58-10.0.0.74:22-10.0.0.1:35960.service - OpenSSH per-connection server daemon (10.0.0.1:35960).
Apr 14 00:27:19.320929 sshd[6313]: Accepted publickey for core from 10.0.0.1 port 35960 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:27:19.378632 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:27:19.449702 systemd-logind[1460]: New session 59 of user core.
Apr 14 00:27:19.532252 systemd[1]: Started session-59.scope - Session 59 of User core.
Apr 14 00:27:21.346627 sshd[6313]: pam_unix(sshd:session): session closed for user core
Apr 14 00:27:21.383119 systemd[1]: sshd@58-10.0.0.74:22-10.0.0.1:35960.service: Deactivated successfully.
Apr 14 00:27:21.439932 systemd[1]: session-59.scope: Deactivated successfully.
Apr 14 00:27:21.499280 systemd-logind[1460]: Session 59 logged out. Waiting for processes to exit.
Apr 14 00:27:21.517980 systemd-logind[1460]: Removed session 59.
Apr 14 00:27:26.542113 systemd[1]: Started sshd@59-10.0.0.74:22-10.0.0.1:39994.service - OpenSSH per-connection server daemon (10.0.0.1:39994).
Apr 14 00:27:26.946682 sshd[6367]: Accepted publickey for core from 10.0.0.1 port 39994 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:27:26.963182 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:27:27.135956 systemd-logind[1460]: New session 60 of user core.
Apr 14 00:27:27.237365 systemd[1]: Started session-60.scope - Session 60 of User core.
Apr 14 00:27:29.804134 sshd[6367]: pam_unix(sshd:session): session closed for user core
Apr 14 00:27:29.837194 systemd[1]: sshd@59-10.0.0.74:22-10.0.0.1:39994.service: Deactivated successfully.
Apr 14 00:27:29.901067 systemd[1]: session-60.scope: Deactivated successfully.
Apr 14 00:27:29.901703 systemd[1]: session-60.scope: Consumed 1.361s CPU time.
Apr 14 00:27:29.924042 systemd-logind[1460]: Session 60 logged out. Waiting for processes to exit.
Apr 14 00:27:29.930491 systemd-logind[1460]: Removed session 60.
Apr 14 00:27:34.920112 systemd[1]: Started sshd@60-10.0.0.74:22-10.0.0.1:39996.service - OpenSSH per-connection server daemon (10.0.0.1:39996).
Apr 14 00:27:35.141688 sshd[6408]: Accepted publickey for core from 10.0.0.1 port 39996 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:27:35.162076 sshd[6408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:27:35.319671 systemd-logind[1460]: New session 61 of user core.
Apr 14 00:27:35.341193 systemd[1]: Started session-61.scope - Session 61 of User core.
Apr 14 00:27:36.422757 sshd[6408]: pam_unix(sshd:session): session closed for user core
Apr 14 00:27:36.454223 systemd[1]: sshd@60-10.0.0.74:22-10.0.0.1:39996.service: Deactivated successfully.
Apr 14 00:27:36.505105 systemd[1]: session-61.scope: Deactivated successfully.
Apr 14 00:27:36.514803 systemd-logind[1460]: Session 61 logged out. Waiting for processes to exit.
Apr 14 00:27:36.527032 systemd-logind[1460]: Removed session 61.
Apr 14 00:27:36.803091 kubelet[2624]: E0414 00:27:36.797318 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:27:41.646695 systemd[1]: Started sshd@61-10.0.0.74:22-10.0.0.1:41046.service - OpenSSH per-connection server daemon (10.0.0.1:41046).
Apr 14 00:27:41.879580 sshd[6458]: Accepted publickey for core from 10.0.0.1 port 41046 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:27:41.901178 sshd[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:27:42.078607 systemd-logind[1460]: New session 62 of user core.
Apr 14 00:27:42.088738 systemd[1]: Started session-62.scope - Session 62 of User core.
Apr 14 00:27:42.798474 kubelet[2624]: E0414 00:27:42.792557 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:27:43.395838 sshd[6458]: pam_unix(sshd:session): session closed for user core
Apr 14 00:27:43.448934 systemd[1]: sshd@61-10.0.0.74:22-10.0.0.1:41046.service: Deactivated successfully.
Apr 14 00:27:43.513102 systemd[1]: session-62.scope: Deactivated successfully.
Apr 14 00:27:43.536688 systemd-logind[1460]: Session 62 logged out. Waiting for processes to exit.
Apr 14 00:27:43.556728 systemd-logind[1460]: Removed session 62.
Apr 14 00:27:48.623278 systemd[1]: Started sshd@62-10.0.0.74:22-10.0.0.1:51000.service - OpenSSH per-connection server daemon (10.0.0.1:51000).
Apr 14 00:27:49.010601 sshd[6496]: Accepted publickey for core from 10.0.0.1 port 51000 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:27:49.026195 sshd[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:27:49.145831 systemd-logind[1460]: New session 63 of user core.
Apr 14 00:27:49.181124 systemd[1]: Started session-63.scope - Session 63 of User core.
Apr 14 00:27:50.135159 sshd[6496]: pam_unix(sshd:session): session closed for user core
Apr 14 00:27:50.177292 systemd-logind[1460]: Session 63 logged out. Waiting for processes to exit.
Apr 14 00:27:50.179662 systemd[1]: sshd@62-10.0.0.74:22-10.0.0.1:51000.service: Deactivated successfully.
Apr 14 00:27:50.198937 systemd[1]: session-63.scope: Deactivated successfully.
Apr 14 00:27:50.222194 systemd-logind[1460]: Removed session 63.
Apr 14 00:27:55.274040 systemd[1]: Started sshd@63-10.0.0.74:22-10.0.0.1:51008.service - OpenSSH per-connection server daemon (10.0.0.1:51008).
Apr 14 00:27:55.507800 sshd[6536]: Accepted publickey for core from 10.0.0.1 port 51008 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:27:55.525183 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:27:55.699082 systemd-logind[1460]: New session 64 of user core.
Apr 14 00:27:55.730387 systemd[1]: Started session-64.scope - Session 64 of User core.
Apr 14 00:27:56.742290 sshd[6536]: pam_unix(sshd:session): session closed for user core
Apr 14 00:27:56.785623 kubelet[2624]: E0414 00:27:56.782899 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:27:56.837338 systemd[1]: sshd@63-10.0.0.74:22-10.0.0.1:51008.service: Deactivated successfully.
Apr 14 00:27:56.890191 systemd[1]: session-64.scope: Deactivated successfully.
Apr 14 00:27:56.897286 systemd-logind[1460]: Session 64 logged out. Waiting for processes to exit.
Apr 14 00:27:56.927928 systemd[1]: Started sshd@64-10.0.0.74:22-10.0.0.1:54854.service - OpenSSH per-connection server daemon (10.0.0.1:54854).
Apr 14 00:27:56.934707 systemd-logind[1460]: Removed session 64.
Apr 14 00:27:57.213693 sshd[6564]: Accepted publickey for core from 10.0.0.1 port 54854 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:27:57.229261 sshd[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:27:57.319608 systemd-logind[1460]: New session 65 of user core.
Apr 14 00:27:57.343725 systemd[1]: Started session-65.scope - Session 65 of User core.
Apr 14 00:27:57.787033 kubelet[2624]: E0414 00:27:57.786740 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:28:00.387132 sshd[6564]: pam_unix(sshd:session): session closed for user core
Apr 14 00:28:00.506041 systemd[1]: sshd@64-10.0.0.74:22-10.0.0.1:54854.service: Deactivated successfully.
Apr 14 00:28:00.527041 systemd[1]: session-65.scope: Deactivated successfully.
Apr 14 00:28:00.533534 systemd[1]: session-65.scope: Consumed 1.777s CPU time.
Apr 14 00:28:00.601318 systemd-logind[1460]: Session 65 logged out. Waiting for processes to exit.
Apr 14 00:28:00.629497 systemd[1]: Started sshd@65-10.0.0.74:22-10.0.0.1:54868.service - OpenSSH per-connection server daemon (10.0.0.1:54868).
Apr 14 00:28:00.634209 systemd-logind[1460]: Removed session 65.
Apr 14 00:28:01.130145 sshd[6582]: Accepted publickey for core from 10.0.0.1 port 54868 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:28:01.145975 sshd[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:28:01.294177 systemd-logind[1460]: New session 66 of user core.
Apr 14 00:28:01.317171 systemd[1]: Started session-66.scope - Session 66 of User core.
Apr 14 00:28:08.807745 kubelet[2624]: E0414 00:28:08.807392 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:28:19.427840 sshd[6582]: pam_unix(sshd:session): session closed for user core
Apr 14 00:28:19.513178 systemd[1]: sshd@65-10.0.0.74:22-10.0.0.1:54868.service: Deactivated successfully.
Apr 14 00:28:19.550768 systemd[1]: session-66.scope: Deactivated successfully.
Apr 14 00:28:19.552491 systemd[1]: session-66.scope: Consumed 7.788s CPU time.
Apr 14 00:28:19.572383 systemd-logind[1460]: Session 66 logged out. Waiting for processes to exit.
Apr 14 00:28:19.624708 systemd[1]: Started sshd@66-10.0.0.74:22-10.0.0.1:48796.service - OpenSSH per-connection server daemon (10.0.0.1:48796).
Apr 14 00:28:19.634318 systemd-logind[1460]: Removed session 66.
Apr 14 00:28:19.818510 kubelet[2624]: E0414 00:28:19.813016 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:28:19.979803 sshd[6679]: Accepted publickey for core from 10.0.0.1 port 48796 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:28:19.990317 sshd[6679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:28:20.053157 systemd-logind[1460]: New session 67 of user core.
Apr 14 00:28:20.117268 systemd[1]: Started session-67.scope - Session 67 of User core.
Apr 14 00:28:24.823085 kubelet[2624]: E0414 00:28:24.822834 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:28:25.322131 sshd[6679]: pam_unix(sshd:session): session closed for user core
Apr 14 00:28:25.372147 systemd[1]: sshd@66-10.0.0.74:22-10.0.0.1:48796.service: Deactivated successfully.
Apr 14 00:28:25.380861 systemd[1]: session-67.scope: Deactivated successfully.
Apr 14 00:28:25.383712 systemd[1]: session-67.scope: Consumed 2.305s CPU time.
Apr 14 00:28:25.414851 systemd-logind[1460]: Session 67 logged out. Waiting for processes to exit.
Apr 14 00:28:25.515274 systemd[1]: Started sshd@67-10.0.0.74:22-10.0.0.1:58116.service - OpenSSH per-connection server daemon (10.0.0.1:58116).
Apr 14 00:28:25.519198 systemd-logind[1460]: Removed session 67.
Apr 14 00:28:25.752634 sshd[6711]: Accepted publickey for core from 10.0.0.1 port 58116 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:28:25.781273 sshd[6711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:28:25.926025 systemd-logind[1460]: New session 68 of user core.
Apr 14 00:28:25.959136 systemd[1]: Started session-68.scope - Session 68 of User core.
Apr 14 00:28:28.593260 sshd[6711]: pam_unix(sshd:session): session closed for user core
Apr 14 00:28:28.622158 systemd[1]: sshd@67-10.0.0.74:22-10.0.0.1:58116.service: Deactivated successfully.
Apr 14 00:28:28.646397 systemd[1]: session-68.scope: Deactivated successfully.
Apr 14 00:28:28.648757 systemd[1]: session-68.scope: Consumed 1.153s CPU time.
Apr 14 00:28:28.721776 systemd-logind[1460]: Session 68 logged out. Waiting for processes to exit.
Apr 14 00:28:28.729303 systemd-logind[1460]: Removed session 68.
Apr 14 00:28:33.679619 systemd[1]: Started sshd@68-10.0.0.74:22-10.0.0.1:58118.service - OpenSSH per-connection server daemon (10.0.0.1:58118).
Apr 14 00:28:34.205626 sshd[6765]: Accepted publickey for core from 10.0.0.1 port 58118 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:28:34.226356 sshd[6765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:28:34.347704 systemd-logind[1460]: New session 69 of user core.
Apr 14 00:28:34.405535 systemd[1]: Started session-69.scope - Session 69 of User core.
Apr 14 00:28:35.795816 sshd[6765]: pam_unix(sshd:session): session closed for user core
Apr 14 00:28:35.966952 systemd[1]: sshd@68-10.0.0.74:22-10.0.0.1:58118.service: Deactivated successfully.
Apr 14 00:28:35.994849 systemd[1]: session-69.scope: Deactivated successfully.
Apr 14 00:28:36.085342 systemd-logind[1460]: Session 69 logged out. Waiting for processes to exit.
Apr 14 00:28:36.110762 systemd-logind[1460]: Removed session 69.
Apr 14 00:28:37.787349 kubelet[2624]: E0414 00:28:37.786737 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:28:40.965850 systemd[1]: Started sshd@69-10.0.0.74:22-10.0.0.1:41690.service - OpenSSH per-connection server daemon (10.0.0.1:41690).
Apr 14 00:28:41.193314 sshd[6802]: Accepted publickey for core from 10.0.0.1 port 41690 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:28:41.205035 sshd[6802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:28:41.327073 systemd-logind[1460]: New session 70 of user core.
Apr 14 00:28:41.424103 systemd[1]: Started session-70.scope - Session 70 of User core.
Apr 14 00:28:42.983839 sshd[6802]: pam_unix(sshd:session): session closed for user core
Apr 14 00:28:43.098381 systemd[1]: sshd@69-10.0.0.74:22-10.0.0.1:41690.service: Deactivated successfully.
Apr 14 00:28:43.115644 systemd[1]: session-70.scope: Deactivated successfully.
Apr 14 00:28:43.132054 systemd-logind[1460]: Session 70 logged out. Waiting for processes to exit.
Apr 14 00:28:43.146003 systemd-logind[1460]: Removed session 70.
Apr 14 00:28:45.826111 kubelet[2624]: E0414 00:28:45.825150 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:28:48.141035 systemd[1]: Started sshd@70-10.0.0.74:22-10.0.0.1:51290.service - OpenSSH per-connection server daemon (10.0.0.1:51290).
Apr 14 00:28:48.413161 sshd[6846]: Accepted publickey for core from 10.0.0.1 port 51290 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:28:48.424044 sshd[6846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:28:48.522696 systemd-logind[1460]: New session 71 of user core.
Apr 14 00:28:48.559678 systemd[1]: Started session-71.scope - Session 71 of User core.
Apr 14 00:28:49.821512 sshd[6846]: pam_unix(sshd:session): session closed for user core
Apr 14 00:28:49.901007 systemd[1]: sshd@70-10.0.0.74:22-10.0.0.1:51290.service: Deactivated successfully.
Apr 14 00:28:49.925335 systemd[1]: session-71.scope: Deactivated successfully.
Apr 14 00:28:49.938451 systemd-logind[1460]: Session 71 logged out. Waiting for processes to exit.
Apr 14 00:28:49.947718 systemd-logind[1460]: Removed session 71.
Apr 14 00:28:54.986371 systemd[1]: Started sshd@71-10.0.0.74:22-10.0.0.1:51298.service - OpenSSH per-connection server daemon (10.0.0.1:51298).
Apr 14 00:28:55.230237 sshd[6895]: Accepted publickey for core from 10.0.0.1 port 51298 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:28:55.239240 sshd[6895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:28:55.486453 systemd-logind[1460]: New session 72 of user core.
Apr 14 00:28:55.505218 systemd[1]: Started session-72.scope - Session 72 of User core.
Apr 14 00:28:56.805893 sshd[6895]: pam_unix(sshd:session): session closed for user core
Apr 14 00:28:56.823101 systemd[1]: sshd@71-10.0.0.74:22-10.0.0.1:51298.service: Deactivated successfully.
Apr 14 00:28:56.913990 systemd[1]: session-72.scope: Deactivated successfully.
Apr 14 00:28:56.917648 systemd-logind[1460]: Session 72 logged out. Waiting for processes to exit.
Apr 14 00:28:56.920659 systemd-logind[1460]: Removed session 72.
Apr 14 00:29:01.946928 systemd[1]: Started sshd@72-10.0.0.74:22-10.0.0.1:36562.service - OpenSSH per-connection server daemon (10.0.0.1:36562).
Apr 14 00:29:02.196181 sshd[6934]: Accepted publickey for core from 10.0.0.1 port 36562 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:29:02.222242 sshd[6934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:29:02.319335 systemd-logind[1460]: New session 73 of user core.
Apr 14 00:29:02.346866 systemd[1]: Started session-73.scope - Session 73 of User core.
Apr 14 00:29:03.536509 sshd[6934]: pam_unix(sshd:session): session closed for user core
Apr 14 00:29:03.587698 systemd[1]: sshd@72-10.0.0.74:22-10.0.0.1:36562.service: Deactivated successfully.
Apr 14 00:29:03.597245 systemd[1]: session-73.scope: Deactivated successfully.
Apr 14 00:29:03.612644 systemd-logind[1460]: Session 73 logged out. Waiting for processes to exit.
Apr 14 00:29:03.621579 systemd-logind[1460]: Removed session 73.
Apr 14 00:29:08.839691 systemd[1]: Started sshd@73-10.0.0.74:22-10.0.0.1:58926.service - OpenSSH per-connection server daemon (10.0.0.1:58926).
Apr 14 00:29:09.171307 sshd[6972]: Accepted publickey for core from 10.0.0.1 port 58926 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:29:09.192001 sshd[6972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:29:09.320518 systemd-logind[1460]: New session 74 of user core.
Apr 14 00:29:09.404653 systemd[1]: Started session-74.scope - Session 74 of User core.
Apr 14 00:29:10.995775 sshd[6972]: pam_unix(sshd:session): session closed for user core
Apr 14 00:29:11.105351 systemd[1]: sshd@73-10.0.0.74:22-10.0.0.1:58926.service: Deactivated successfully.
Apr 14 00:29:11.141836 systemd[1]: session-74.scope: Deactivated successfully.
Apr 14 00:29:11.199120 systemd-logind[1460]: Session 74 logged out. Waiting for processes to exit.
Apr 14 00:29:11.233676 systemd-logind[1460]: Removed session 74.
Apr 14 00:29:14.781886 kubelet[2624]: E0414 00:29:14.781660 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:29:16.129080 systemd[1]: Started sshd@74-10.0.0.74:22-10.0.0.1:44720.service - OpenSSH per-connection server daemon (10.0.0.1:44720).
Apr 14 00:29:16.318342 sshd[7022]: Accepted publickey for core from 10.0.0.1 port 44720 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:29:16.339837 sshd[7022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:29:16.445896 systemd-logind[1460]: New session 75 of user core.
Apr 14 00:29:16.495847 systemd[1]: Started session-75.scope - Session 75 of User core.
Apr 14 00:29:17.418766 sshd[7022]: pam_unix(sshd:session): session closed for user core
Apr 14 00:29:17.446631 systemd[1]: sshd@74-10.0.0.74:22-10.0.0.1:44720.service: Deactivated successfully.
Apr 14 00:29:17.486178 systemd[1]: session-75.scope: Deactivated successfully.
Apr 14 00:29:17.492972 systemd-logind[1460]: Session 75 logged out. Waiting for processes to exit.
Apr 14 00:29:17.495316 systemd-logind[1460]: Removed session 75.
Apr 14 00:29:22.525357 systemd[1]: Started sshd@75-10.0.0.74:22-10.0.0.1:44730.service - OpenSSH per-connection server daemon (10.0.0.1:44730).
Apr 14 00:29:22.694306 sshd[7062]: Accepted publickey for core from 10.0.0.1 port 44730 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:29:22.702175 sshd[7062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:29:22.744194 systemd-logind[1460]: New session 76 of user core.
Apr 14 00:29:22.777791 systemd[1]: Started session-76.scope - Session 76 of User core.
Apr 14 00:29:23.511156 sshd[7062]: pam_unix(sshd:session): session closed for user core
Apr 14 00:29:23.531811 systemd[1]: sshd@75-10.0.0.74:22-10.0.0.1:44730.service: Deactivated successfully.
Apr 14 00:29:23.625172 systemd[1]: session-76.scope: Deactivated successfully.
Apr 14 00:29:23.635043 systemd-logind[1460]: Session 76 logged out. Waiting for processes to exit.
Apr 14 00:29:23.642052 systemd-logind[1460]: Removed session 76.
Apr 14 00:29:24.787372 kubelet[2624]: E0414 00:29:24.786812 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:29:25.781754 kubelet[2624]: E0414 00:29:25.781655 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:29:27.804003 kubelet[2624]: E0414 00:29:27.803291 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:29:28.525238 systemd[1]: Started sshd@76-10.0.0.74:22-10.0.0.1:48024.service - OpenSSH per-connection server daemon (10.0.0.1:48024).
Apr 14 00:29:28.588795 sshd[7097]: Accepted publickey for core from 10.0.0.1 port 48024 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:29:28.593768 sshd[7097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:29:28.602067 systemd-logind[1460]: New session 77 of user core.
Apr 14 00:29:28.620236 systemd[1]: Started session-77.scope - Session 77 of User core.
Apr 14 00:29:29.169790 sshd[7097]: pam_unix(sshd:session): session closed for user core
Apr 14 00:29:29.208274 systemd[1]: sshd@76-10.0.0.74:22-10.0.0.1:48024.service: Deactivated successfully.
Apr 14 00:29:29.232294 systemd[1]: session-77.scope: Deactivated successfully.
Apr 14 00:29:29.252927 systemd-logind[1460]: Session 77 logged out. Waiting for processes to exit.
Apr 14 00:29:29.270334 systemd-logind[1460]: Removed session 77.
Apr 14 00:29:34.210213 systemd[1]: Started sshd@77-10.0.0.74:22-10.0.0.1:48034.service - OpenSSH per-connection server daemon (10.0.0.1:48034).
Apr 14 00:29:34.321695 sshd[7131]: Accepted publickey for core from 10.0.0.1 port 48034 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:29:34.332200 sshd[7131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:29:34.419033 systemd-logind[1460]: New session 78 of user core.
Apr 14 00:29:34.487168 systemd[1]: Started session-78.scope - Session 78 of User core.
Apr 14 00:29:35.492067 sshd[7131]: pam_unix(sshd:session): session closed for user core
Apr 14 00:29:35.522936 systemd[1]: sshd@77-10.0.0.74:22-10.0.0.1:48034.service: Deactivated successfully.
Apr 14 00:29:35.533735 systemd[1]: session-78.scope: Deactivated successfully.
Apr 14 00:29:35.541247 systemd-logind[1460]: Session 78 logged out. Waiting for processes to exit.
Apr 14 00:29:35.558777 systemd-logind[1460]: Removed session 78.
Apr 14 00:29:37.789040 kubelet[2624]: E0414 00:29:37.788550 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:29:40.549707 systemd[1]: Started sshd@78-10.0.0.74:22-10.0.0.1:43660.service - OpenSSH per-connection server daemon (10.0.0.1:43660).
Apr 14 00:29:40.680371 sshd[7168]: Accepted publickey for core from 10.0.0.1 port 43660 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:29:40.691849 sshd[7168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:29:40.733963 systemd-logind[1460]: New session 79 of user core.
Apr 14 00:29:40.751915 systemd[1]: Started session-79.scope - Session 79 of User core.
Apr 14 00:29:41.329866 sshd[7168]: pam_unix(sshd:session): session closed for user core
Apr 14 00:29:41.341098 systemd[1]: sshd@78-10.0.0.74:22-10.0.0.1:43660.service: Deactivated successfully.
Apr 14 00:29:41.349297 systemd[1]: session-79.scope: Deactivated successfully.
Apr 14 00:29:41.360636 systemd-logind[1460]: Session 79 logged out. Waiting for processes to exit.
Apr 14 00:29:41.368754 systemd-logind[1460]: Removed session 79.
Apr 14 00:29:46.355073 systemd[1]: Started sshd@79-10.0.0.74:22-10.0.0.1:52914.service - OpenSSH per-connection server daemon (10.0.0.1:52914).
Apr 14 00:29:46.412147 sshd[7204]: Accepted publickey for core from 10.0.0.1 port 52914 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:29:46.432930 sshd[7204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:29:46.490190 systemd-logind[1460]: New session 80 of user core.
Apr 14 00:29:46.518963 systemd[1]: Started session-80.scope - Session 80 of User core.
Apr 14 00:29:46.906154 sshd[7204]: pam_unix(sshd:session): session closed for user core
Apr 14 00:29:46.927165 systemd[1]: sshd@79-10.0.0.74:22-10.0.0.1:52914.service: Deactivated successfully.
Apr 14 00:29:46.945283 systemd[1]: session-80.scope: Deactivated successfully.
Apr 14 00:29:46.948132 systemd-logind[1460]: Session 80 logged out. Waiting for processes to exit.
Apr 14 00:29:46.952920 systemd-logind[1460]: Removed session 80.
Apr 14 00:29:51.942696 systemd[1]: Started sshd@80-10.0.0.74:22-10.0.0.1:52916.service - OpenSSH per-connection server daemon (10.0.0.1:52916).
Apr 14 00:29:52.063465 sshd[7254]: Accepted publickey for core from 10.0.0.1 port 52916 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:29:52.065372 sshd[7254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:29:52.074383 systemd-logind[1460]: New session 81 of user core.
Apr 14 00:29:52.090509 systemd[1]: Started session-81.scope - Session 81 of User core.
Apr 14 00:29:52.405388 sshd[7254]: pam_unix(sshd:session): session closed for user core
Apr 14 00:29:52.408885 systemd[1]: sshd@80-10.0.0.74:22-10.0.0.1:52916.service: Deactivated successfully.
Apr 14 00:29:52.412865 systemd[1]: session-81.scope: Deactivated successfully.
Apr 14 00:29:52.413800 systemd-logind[1460]: Session 81 logged out. Waiting for processes to exit.
Apr 14 00:29:52.414806 systemd-logind[1460]: Removed session 81.
Apr 14 00:29:57.505680 systemd[1]: Started sshd@81-10.0.0.74:22-10.0.0.1:43212.service - OpenSSH per-connection server daemon (10.0.0.1:43212).
Apr 14 00:29:57.895262 sshd[7288]: Accepted publickey for core from 10.0.0.1 port 43212 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:29:57.929004 sshd[7288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:29:57.993887 systemd-logind[1460]: New session 82 of user core.
Apr 14 00:29:58.024892 systemd[1]: Started session-82.scope - Session 82 of User core.
Apr 14 00:29:58.404835 sshd[7288]: pam_unix(sshd:session): session closed for user core
Apr 14 00:29:58.413588 systemd[1]: sshd@81-10.0.0.74:22-10.0.0.1:43212.service: Deactivated successfully.
Apr 14 00:29:58.416631 systemd[1]: session-82.scope: Deactivated successfully.
Apr 14 00:29:58.428988 systemd-logind[1460]: Session 82 logged out. Waiting for processes to exit.
Apr 14 00:29:58.442870 systemd-logind[1460]: Removed session 82.
Apr 14 00:30:03.516654 systemd[1]: Started sshd@82-10.0.0.74:22-10.0.0.1:43218.service - OpenSSH per-connection server daemon (10.0.0.1:43218).
Apr 14 00:30:03.539255 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Apr 14 00:30:03.622804 systemd-tmpfiles[7330]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 14 00:30:03.632303 systemd-tmpfiles[7330]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 14 00:30:03.640276 sshd[7329]: Accepted publickey for core from 10.0.0.1 port 43218 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 14 00:30:03.642796 systemd-tmpfiles[7330]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 14 00:30:03.643082 systemd-tmpfiles[7330]: ACLs are not supported, ignoring.
Apr 14 00:30:03.643128 systemd-tmpfiles[7330]: ACLs are not supported, ignoring.
Apr 14 00:30:03.647750 systemd-tmpfiles[7330]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 00:30:03.647982 sshd[7329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:30:03.647823 systemd-tmpfiles[7330]: Skipping /boot
Apr 14 00:30:03.715312 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Apr 14 00:30:03.716642 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Apr 14 00:30:03.731555 systemd-logind[1460]: New session 83 of user core.
Apr 14 00:30:03.743730 systemd[1]: Started session-83.scope - Session 83 of User core.
Apr 14 00:30:04.143970 sshd[7329]: pam_unix(sshd:session): session closed for user core
Apr 14 00:30:04.149845 systemd[1]: sshd@82-10.0.0.74:22-10.0.0.1:43218.service: Deactivated successfully.
Apr 14 00:30:04.158361 systemd[1]: session-83.scope: Deactivated successfully.
Apr 14 00:30:04.163348 systemd-logind[1460]: Session 83 logged out. Waiting for processes to exit.
Apr 14 00:30:04.166150 systemd-logind[1460]: Removed session 83.
Apr 14 00:30:04.784622 kubelet[2624]: E0414 00:30:04.783987 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"