Apr 13 23:57:35.647559 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026 Apr 13 23:57:35.647589 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 23:57:35.647604 kernel: BIOS-provided physical RAM map: Apr 13 23:57:35.647612 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 13 23:57:35.647619 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 13 23:57:35.647628 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 13 23:57:35.647637 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 13 23:57:35.647646 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 13 23:57:35.647654 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 13 23:57:35.647662 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 13 23:57:35.647673 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 13 23:57:35.647681 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Apr 13 23:57:35.647689 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Apr 13 23:57:35.647697 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Apr 13 23:57:35.647708 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 13 23:57:35.647717 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 13 23:57:35.647727 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 
13 23:57:35.647736 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 13 23:57:35.647745 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 13 23:57:35.647753 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 13 23:57:35.647762 kernel: NX (Execute Disable) protection: active Apr 13 23:57:35.647771 kernel: APIC: Static calls initialized Apr 13 23:57:35.647779 kernel: efi: EFI v2.7 by EDK II Apr 13 23:57:35.647788 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Apr 13 23:57:35.647797 kernel: SMBIOS 2.8 present. Apr 13 23:57:35.647806 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Apr 13 23:57:35.647814 kernel: Hypervisor detected: KVM Apr 13 23:57:35.647825 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 13 23:57:35.647834 kernel: kvm-clock: using sched offset of 7244961470 cycles Apr 13 23:57:35.647843 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 13 23:57:35.647852 kernel: tsc: Detected 2793.438 MHz processor Apr 13 23:57:35.647860 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 13 23:57:35.647869 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 13 23:57:35.647878 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000 Apr 13 23:57:35.647886 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 13 23:57:35.647895 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 13 23:57:35.647905 kernel: Using GB pages for direct mapping Apr 13 23:57:35.647931 kernel: Secure boot disabled Apr 13 23:57:35.647939 kernel: ACPI: Early table checksum verification disabled Apr 13 23:57:35.647947 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 13 23:57:35.647958 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 13 23:57:35.647966 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:57:35.647973 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:57:35.647983 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 13 23:57:35.647991 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:57:35.647999 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:57:35.648008 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:57:35.648018 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 23:57:35.648026 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 13 23:57:35.648034 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Apr 13 23:57:35.648045 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Apr 13 23:57:35.648053 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 13 23:57:35.648061 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Apr 13 23:57:35.648070 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Apr 13 23:57:35.648077 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Apr 13 23:57:35.648235 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Apr 13 23:57:35.648245 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Apr 13 23:57:35.648253 kernel: No NUMA configuration found Apr 13 23:57:35.648263 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 13 23:57:35.648277 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 13 23:57:35.648284 kernel: Zone ranges: Apr 13 23:57:35.648323 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 13 23:57:35.648332 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cf3ffff] Apr 13 23:57:35.648339 kernel: Normal empty Apr 13 23:57:35.648348 kernel: Movable zone start for each node Apr 13 23:57:35.648356 kernel: Early memory node ranges Apr 13 23:57:35.648364 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 13 23:57:35.648372 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 13 23:57:35.648383 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 13 23:57:35.648391 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 13 23:57:35.648400 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 13 23:57:35.648408 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 13 23:57:35.648418 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 13 23:57:35.648427 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 23:57:35.648436 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 13 23:57:35.648445 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 13 23:57:35.648454 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 23:57:35.648463 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 13 23:57:35.648477 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 13 23:57:35.648484 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 13 23:57:35.648492 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 13 23:57:35.648501 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 13 23:57:35.648510 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 13 23:57:35.648519 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 13 23:57:35.648527 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 13 23:57:35.648535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 13 23:57:35.648545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 13 
23:57:35.648555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 13 23:57:35.648564 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 13 23:57:35.648572 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 13 23:57:35.648580 kernel: TSC deadline timer available Apr 13 23:57:35.648589 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 13 23:57:35.648599 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 13 23:57:35.648607 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 13 23:57:35.648617 kernel: kvm-guest: setup PV sched yield Apr 13 23:57:35.648625 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 13 23:57:35.648636 kernel: Booting paravirtualized kernel on KVM Apr 13 23:57:35.648645 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 13 23:57:35.648654 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 13 23:57:35.648663 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 13 23:57:35.648672 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 13 23:57:35.648681 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 13 23:57:35.648690 kernel: kvm-guest: PV spinlocks enabled Apr 13 23:57:35.648698 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 13 23:57:35.648708 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 23:57:35.648720 kernel: random: crng init done Apr 13 23:57:35.648729 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 13 23:57:35.648739 kernel: Inode-cache hash table 
entries: 262144 (order: 9, 2097152 bytes, linear) Apr 13 23:57:35.648747 kernel: Fallback order for Node 0: 0 Apr 13 23:57:35.648755 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 13 23:57:35.648764 kernel: Policy zone: DMA32 Apr 13 23:57:35.648771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 13 23:57:35.648779 kernel: Memory: 2394672K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 172124K reserved, 0K cma-reserved) Apr 13 23:57:35.648790 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 13 23:57:35.648799 kernel: ftrace: allocating 37996 entries in 149 pages Apr 13 23:57:35.648806 kernel: ftrace: allocated 149 pages with 4 groups Apr 13 23:57:35.648814 kernel: Dynamic Preempt: voluntary Apr 13 23:57:35.648823 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 13 23:57:35.648840 kernel: rcu: RCU event tracing is enabled. Apr 13 23:57:35.648850 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 13 23:57:35.648859 kernel: Trampoline variant of Tasks RCU enabled. Apr 13 23:57:35.648869 kernel: Rude variant of Tasks RCU enabled. Apr 13 23:57:35.648877 kernel: Tracing variant of Tasks RCU enabled. Apr 13 23:57:35.648885 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 13 23:57:35.648894 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 13 23:57:35.648906 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 13 23:57:35.648938 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 13 23:57:35.648947 kernel: Console: colour dummy device 80x25 Apr 13 23:57:35.648955 kernel: printk: console [ttyS0] enabled Apr 13 23:57:35.648964 kernel: ACPI: Core revision 20230628 Apr 13 23:57:35.648974 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 13 23:57:35.648982 kernel: APIC: Switch to symmetric I/O mode setup Apr 13 23:57:35.648991 kernel: x2apic enabled Apr 13 23:57:35.648999 kernel: APIC: Switched APIC routing to: physical x2apic Apr 13 23:57:35.649008 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 13 23:57:35.649016 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 13 23:57:35.649024 kernel: kvm-guest: setup PV IPIs Apr 13 23:57:35.649032 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 13 23:57:35.649041 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 13 23:57:35.649051 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 13 23:57:35.649059 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 13 23:57:35.649067 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 13 23:57:35.649075 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 13 23:57:35.649083 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 13 23:57:35.649091 kernel: Spectre V2 : Mitigation: Retpolines Apr 13 23:57:35.649099 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 13 23:57:35.649107 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 13 23:57:35.649117 kernel: RETBleed: Vulnerable Apr 13 23:57:35.649125 kernel: Speculative Store Bypass: Vulnerable Apr 13 23:57:35.649133 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 13 23:57:35.649141 kernel: GDS: Unknown: Dependent on hypervisor status Apr 13 23:57:35.649149 kernel: active return thunk: its_return_thunk Apr 13 23:57:35.649157 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 13 23:57:35.649165 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 13 23:57:35.649174 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 13 23:57:35.649182 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 13 23:57:35.649192 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 13 23:57:35.649200 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 13 23:57:35.649208 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 13 23:57:35.649216 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 13 23:57:35.649224 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 13 23:57:35.649231 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 13 23:57:35.649239 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 13 23:57:35.649247 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 13 23:57:35.649255 kernel: Freeing SMP alternatives memory: 32K Apr 13 23:57:35.649265 kernel: pid_max: default: 32768 minimum: 301 Apr 13 23:57:35.649273 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 13 23:57:35.649281 kernel: landlock: Up and running. Apr 13 23:57:35.649430 kernel: SELinux: Initializing. 
Apr 13 23:57:35.649439 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 23:57:35.649447 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 23:57:35.649455 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 13 23:57:35.649463 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 23:57:35.649472 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 23:57:35.649488 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 13 23:57:35.649498 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 13 23:57:35.649506 kernel: signal: max sigframe size: 3632 Apr 13 23:57:35.649514 kernel: rcu: Hierarchical SRCU implementation. Apr 13 23:57:35.649522 kernel: rcu: Max phase no-delay instances is 400. Apr 13 23:57:35.649530 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 13 23:57:35.649538 kernel: smp: Bringing up secondary CPUs ... Apr 13 23:57:35.649546 kernel: smpboot: x86: Booting SMP configuration: Apr 13 23:57:35.649553 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 13 23:57:35.649563 kernel: smp: Brought up 1 node, 4 CPUs Apr 13 23:57:35.649571 kernel: smpboot: Max logical packages: 1 Apr 13 23:57:35.649579 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 13 23:57:35.649587 kernel: devtmpfs: initialized Apr 13 23:57:35.649595 kernel: x86/mm: Memory block size: 128MB Apr 13 23:57:35.649603 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 13 23:57:35.649611 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 13 23:57:35.649619 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 13 23:57:35.649628 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 13 23:57:35.649638 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 13 23:57:35.649647 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 13 23:57:35.649656 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 13 23:57:35.649665 kernel: pinctrl core: initialized pinctrl subsystem Apr 13 23:57:35.649675 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 23:57:35.649684 kernel: audit: initializing netlink subsys (disabled) Apr 13 23:57:35.649692 kernel: audit: type=2000 audit(1776124653.255:1): state=initialized audit_enabled=0 res=1 Apr 13 23:57:35.649700 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 23:57:35.649709 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 13 23:57:35.649720 kernel: cpuidle: using governor menu Apr 13 23:57:35.649730 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 23:57:35.649739 kernel: dca service started, version 1.12.1 Apr 13 23:57:35.649749 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 13 
23:57:35.649758 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 13 23:57:35.649768 kernel: PCI: Using configuration type 1 for base access Apr 13 23:57:35.649778 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 13 23:57:35.649787 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 23:57:35.649796 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 23:57:35.649808 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 23:57:35.649817 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 23:57:35.649827 kernel: ACPI: Added _OSI(Module Device) Apr 13 23:57:35.649836 kernel: ACPI: Added _OSI(Processor Device) Apr 13 23:57:35.649846 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 23:57:35.649855 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 13 23:57:35.649865 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 13 23:57:35.649875 kernel: ACPI: Interpreter enabled Apr 13 23:57:35.649884 kernel: ACPI: PM: (supports S0 S3 S5) Apr 13 23:57:35.649895 kernel: ACPI: Using IOAPIC for interrupt routing Apr 13 23:57:35.649904 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 13 23:57:35.649973 kernel: PCI: Using E820 reservations for host bridge windows Apr 13 23:57:35.649985 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 13 23:57:35.649995 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 13 23:57:35.650163 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 13 23:57:35.650243 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 13 23:57:35.650360 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 13 23:57:35.650374 kernel: PCI host bridge to bus 0000:00 Apr 13 23:57:35.650479 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Apr 13 23:57:35.650561 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 13 23:57:35.650640 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 13 23:57:35.650722 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 13 23:57:35.650823 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 13 23:57:35.650909 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 13 23:57:35.651011 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 13 23:57:35.651120 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 13 23:57:35.651218 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 13 23:57:35.651410 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 13 23:57:35.651506 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 13 23:57:35.651589 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 13 23:57:35.651711 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 13 23:57:35.651804 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 13 23:57:35.651907 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 13 23:57:35.652042 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 13 23:57:35.652152 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 13 23:57:35.652249 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 13 23:57:35.652444 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 13 23:57:35.652547 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 13 23:57:35.652639 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Apr 13 23:57:35.652732 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 13 23:57:35.652832 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Apr 13 23:57:35.652946 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Apr 13 23:57:35.653041 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 13 23:57:35.653139 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 13 23:57:35.653230 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 13 23:57:35.653434 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 13 23:57:35.653517 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 13 23:57:35.653608 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 13 23:57:35.653686 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Apr 13 23:57:35.653763 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Apr 13 23:57:35.653858 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 13 23:57:35.654079 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Apr 13 23:57:35.654091 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 13 23:57:35.654100 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 13 23:57:35.654108 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 13 23:57:35.654116 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 13 23:57:35.654124 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 13 23:57:35.654132 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 13 23:57:35.654148 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 13 23:57:35.654156 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 13 23:57:35.654164 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 13 23:57:35.654172 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 13 23:57:35.654180 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 13 23:57:35.654189 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 
Apr 13 23:57:35.654197 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 13 23:57:35.654205 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 13 23:57:35.654213 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 13 23:57:35.654223 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 13 23:57:35.654231 kernel: iommu: Default domain type: Translated Apr 13 23:57:35.654239 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 13 23:57:35.654247 kernel: efivars: Registered efivars operations Apr 13 23:57:35.654255 kernel: PCI: Using ACPI for IRQ routing Apr 13 23:57:35.654263 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 13 23:57:35.654271 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 13 23:57:35.654279 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 13 23:57:35.654689 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 13 23:57:35.654762 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 13 23:57:35.654951 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 13 23:57:35.655036 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 13 23:57:35.655119 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 13 23:57:35.655131 kernel: vgaarb: loaded Apr 13 23:57:35.655141 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 13 23:57:35.655151 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 13 23:57:35.655160 kernel: clocksource: Switched to clocksource kvm-clock Apr 13 23:57:35.655170 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 23:57:35.655183 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 23:57:35.655193 kernel: pnp: PnP ACPI init Apr 13 23:57:35.655504 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 13 23:57:35.655524 kernel: pnp: PnP ACPI: found 6 devices Apr 13 23:57:35.655534 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 13 23:57:35.655543 kernel: NET: Registered PF_INET protocol family Apr 13 23:57:35.655553 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 23:57:35.655563 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 13 23:57:35.655577 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 23:57:35.655586 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 13 23:57:35.655596 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 23:57:35.655606 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 23:57:35.655616 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 23:57:35.655627 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 23:57:35.655636 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 23:57:35.655645 kernel: NET: Registered PF_XDP protocol family Apr 13 23:57:35.655753 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 13 23:57:35.655838 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 13 23:57:35.656068 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 13 23:57:35.656174 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 23:57:35.656250 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 23:57:35.656376 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 13 23:57:35.656700 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 13 23:57:35.656800 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Apr 13 23:57:35.656823 kernel: PCI: CLS 0 bytes, default 64 Apr 13 23:57:35.656833 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 
fixed counters, 10737418240 ms ovfl timer Apr 13 23:57:35.656874 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 13 23:57:35.656904 kernel: Initialise system trusted keyrings Apr 13 23:57:35.656937 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 23:57:35.656958 kernel: Key type asymmetric registered Apr 13 23:57:35.656989 kernel: Asymmetric key parser 'x509' registered Apr 13 23:57:35.657008 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 23:57:35.657039 kernel: io scheduler mq-deadline registered Apr 13 23:57:35.657063 kernel: io scheduler kyber registered Apr 13 23:57:35.657083 kernel: io scheduler bfq registered Apr 13 23:57:35.657094 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 23:57:35.657113 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 13 23:57:35.657124 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 13 23:57:35.657135 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 13 23:57:35.657145 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 23:57:35.657155 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 23:57:35.657164 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 23:57:35.657177 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 23:57:35.657188 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 23:57:35.657597 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 13 23:57:35.657704 kernel: rtc_cmos 00:04: registered as rtc0 Apr 13 23:57:35.657779 kernel: rtc_cmos 00:04: setting system clock to 2026-04-13T23:57:34 UTC (1776124654) Apr 13 23:57:35.657793 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 23:57:35.657875 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Apr 13 23:57:35.657888 kernel: intel_pstate: CPU model not supported Apr 13 
23:57:35.658019 kernel: efifb: probing for efifb Apr 13 23:57:35.658029 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Apr 13 23:57:35.658038 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Apr 13 23:57:35.658047 kernel: efifb: scrolling: redraw Apr 13 23:57:35.658056 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Apr 13 23:57:35.658065 kernel: Console: switching to colour frame buffer device 100x37 Apr 13 23:57:35.658074 kernel: fb0: EFI VGA frame buffer device Apr 13 23:57:35.658193 kernel: pstore: Using crash dump compression: deflate Apr 13 23:57:35.658240 kernel: pstore: Registered efi_pstore as persistent store backend Apr 13 23:57:35.658253 kernel: NET: Registered PF_INET6 protocol family Apr 13 23:57:35.658264 kernel: Segment Routing with IPv6 Apr 13 23:57:35.658274 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 23:57:35.658350 kernel: NET: Registered PF_PACKET protocol family Apr 13 23:57:35.658378 kernel: Key type dns_resolver registered Apr 13 23:57:35.658389 kernel: IPI shorthand broadcast: enabled Apr 13 23:57:35.658400 kernel: sched_clock: Marking stable (1695012695, 497574137)->(2701212086, -508625254) Apr 13 23:57:35.658411 kernel: registered taskstats version 1 Apr 13 23:57:35.658422 kernel: Loading compiled-in X.509 certificates Apr 13 23:57:35.658457 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 23:57:35.658468 kernel: Key type .fscrypt registered Apr 13 23:57:35.658479 kernel: Key type fscrypt-provisioning registered Apr 13 23:57:35.658489 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 23:57:35.658500 kernel: ima: Allocated hash algorithm: sha1
Apr 13 23:57:35.658510 kernel: ima: No architecture policies found
Apr 13 23:57:35.658521 kernel: clk: Disabling unused clocks
Apr 13 23:57:35.658532 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 23:57:35.658543 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 23:57:35.658556 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 23:57:35.658567 kernel: Run /init as init process
Apr 13 23:57:35.658577 kernel: with arguments:
Apr 13 23:57:35.658589 kernel: /init
Apr 13 23:57:35.658599 kernel: with environment:
Apr 13 23:57:35.658610 kernel: HOME=/
Apr 13 23:57:35.658620 kernel: TERM=linux
Apr 13 23:57:35.658633 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 23:57:35.658650 systemd[1]: Detected virtualization kvm.
Apr 13 23:57:35.658665 systemd[1]: Detected architecture x86-64.
Apr 13 23:57:35.658675 systemd[1]: Running in initrd.
Apr 13 23:57:35.658686 systemd[1]: No hostname configured, using default hostname.
Apr 13 23:57:35.658696 systemd[1]: Hostname set to .
Apr 13 23:57:35.658708 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 23:57:35.658717 systemd[1]: Queued start job for default target initrd.target.
Apr 13 23:57:35.658727 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 23:57:35.658736 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 23:57:35.658746 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 23:57:35.658755 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 23:57:35.658765 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 23:57:35.658775 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 23:57:35.658790 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 23:57:35.658800 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 23:57:35.658812 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 23:57:35.658823 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:57:35.658834 systemd[1]: Reached target paths.target - Path Units.
Apr 13 23:57:35.658846 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 23:57:35.658857 systemd[1]: Reached target swap.target - Swaps.
Apr 13 23:57:35.658871 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 23:57:35.658882 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 23:57:35.658894 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 23:57:35.658905 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 23:57:35.659142 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 23:57:35.659155 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 23:57:35.659167 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 23:57:35.659178 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 23:57:35.659190 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 23:57:35.659218 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 23:57:35.659230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 23:57:35.659241 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 23:57:35.659252 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 23:57:35.659264 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 23:57:35.659275 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 23:57:35.659320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:57:35.659332 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 23:57:35.659407 systemd-journald[195]: Collecting audit messages is disabled.
Apr 13 23:57:35.659454 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 23:57:35.659476 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 23:57:35.659718 systemd-journald[195]: Journal started
Apr 13 23:57:35.659753 systemd-journald[195]: Runtime Journal (/run/log/journal/a394912dfba54cd2b259ef30e9f5ac64) is 6.0M, max 48.3M, 42.2M free.
Apr 13 23:57:35.660951 systemd-modules-load[196]: Inserted module 'overlay'
Apr 13 23:57:35.678349 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 23:57:35.681863 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 23:57:35.684000 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 23:57:35.759032 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:57:35.779932 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:57:35.788134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 23:57:35.797579 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 23:57:35.801431 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:57:35.808900 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 23:57:35.813867 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 23:57:35.830597 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:57:35.833638 dracut-cmdline[223]: dracut-dracut-053
Apr 13 23:57:35.839460 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 23:57:35.857377 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 23:57:35.865652 systemd-modules-load[196]: Inserted module 'br_netfilter'
Apr 13 23:57:35.869261 kernel: Bridge firewalling registered
Apr 13 23:57:35.870860 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 23:57:35.895473 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 23:57:35.927513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 23:57:35.953264 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 23:57:36.047725 systemd-resolved[271]: Positive Trust Anchors:
Apr 13 23:57:36.047758 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 23:57:36.047791 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 23:57:36.055496 systemd-resolved[271]: Defaulting to hostname 'linux'.
Apr 13 23:57:36.057559 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 23:57:36.071132 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:57:36.196955 kernel: SCSI subsystem initialized
Apr 13 23:57:36.219933 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 23:57:36.245406 kernel: iscsi: registered transport (tcp)
Apr 13 23:57:36.351200 kernel: iscsi: registered transport (qla4xxx)
Apr 13 23:57:36.351411 kernel: QLogic iSCSI HBA Driver
Apr 13 23:57:36.618856 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 23:57:36.635464 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 23:57:36.769009 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 23:57:36.773907 kernel: device-mapper: uevent: version 1.0.3
Apr 13 23:57:36.773994 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 23:57:36.863741 kernel: raid6: avx512x4 gen() 30864 MB/s
Apr 13 23:57:36.883470 kernel: raid6: avx512x2 gen() 30102 MB/s
Apr 13 23:57:36.954087 kernel: raid6: avx512x1 gen() 16133 MB/s
Apr 13 23:57:36.974008 kernel: raid6: avx2x4 gen() 18123 MB/s
Apr 13 23:57:36.992099 kernel: raid6: avx2x2 gen() 16111 MB/s
Apr 13 23:57:37.010045 kernel: raid6: avx2x1 gen() 12193 MB/s
Apr 13 23:57:37.010129 kernel: raid6: using algorithm avx512x4 gen() 30864 MB/s
Apr 13 23:57:37.029572 kernel: raid6: .... xor() 8537 MB/s, rmw enabled
Apr 13 23:57:37.029652 kernel: raid6: using avx512x2 recovery algorithm
Apr 13 23:57:37.069479 kernel: xor: automatically using best checksumming function avx
Apr 13 23:57:37.607171 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 23:57:37.626785 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 23:57:37.640874 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 23:57:37.677034 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 13 23:57:37.732735 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 23:57:37.752069 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 23:57:37.783711 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Apr 13 23:57:37.849255 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 23:57:37.872866 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 23:57:37.963788 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 23:57:37.975522 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 23:57:38.014404 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 23:57:38.021236 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 23:57:38.027912 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 23:57:38.030978 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 23:57:38.046840 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 23:57:38.065959 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 23:57:38.074632 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 13 23:57:38.077422 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 23:57:38.116966 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 13 23:57:38.117273 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 23:57:38.124276 kernel: GPT:9289727 != 19775487
Apr 13 23:57:38.124369 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 23:57:38.124384 kernel: GPT:9289727 != 19775487
Apr 13 23:57:38.124397 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 23:57:38.124411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:57:38.120172 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 23:57:38.120336 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:57:38.134424 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:57:38.150468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:57:38.150763 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:57:38.159705 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 23:57:38.159743 kernel: AES CTR mode by8 optimization enabled
Apr 13 23:57:38.155228 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:57:38.178181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:57:38.211452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:57:38.211598 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:57:38.240358 kernel: libata version 3.00 loaded.
Apr 13 23:57:38.240329 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 13 23:57:38.263607 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (466)
Apr 13 23:57:38.270568 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Apr 13 23:57:38.272941 kernel: ahci 0000:00:1f.2: version 3.0
Apr 13 23:57:38.273143 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 13 23:57:38.284081 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 13 23:57:38.284247 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 13 23:57:38.286647 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 13 23:57:38.297404 kernel: scsi host0: ahci
Apr 13 23:57:38.297580 kernel: scsi host1: ahci
Apr 13 23:57:38.299611 kernel: scsi host2: ahci
Apr 13 23:57:38.302077 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 13 23:57:38.307787 kernel: scsi host3: ahci
Apr 13 23:57:38.312110 kernel: scsi host4: ahci
Apr 13 23:57:38.312366 kernel: scsi host5: ahci
Apr 13 23:57:38.312531 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 13 23:57:38.315104 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 13 23:57:38.316330 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 13 23:57:38.320717 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 13 23:57:38.320758 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 13 23:57:38.323194 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 13 23:57:38.325646 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 13 23:57:38.333613 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 23:57:38.351343 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 23:57:38.358871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:57:38.411598 disk-uuid[571]: Primary Header is updated.
Apr 13 23:57:38.411598 disk-uuid[571]: Secondary Entries is updated.
Apr 13 23:57:38.411598 disk-uuid[571]: Secondary Header is updated.
Apr 13 23:57:38.424833 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:57:38.431910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:57:38.447094 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 23:57:38.472070 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 23:57:38.649548 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 13 23:57:38.652447 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 13 23:57:38.658379 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 13 23:57:38.658457 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 13 23:57:38.663768 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 13 23:57:38.672274 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 13 23:57:38.672736 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 13 23:57:38.672769 kernel: ata3.00: applying bridge limits
Apr 13 23:57:38.678227 kernel: ata3.00: configured for UDMA/100
Apr 13 23:57:38.684619 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 13 23:57:38.823477 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 13 23:57:38.823810 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 23:57:38.840331 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 13 23:57:39.452926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 23:57:39.460764 disk-uuid[572]: The operation has completed successfully.
Apr 13 23:57:39.537745 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 23:57:39.541149 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 23:57:39.655561 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 23:57:39.663122 sh[603]: Success
Apr 13 23:57:39.713332 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 13 23:57:39.790264 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 23:57:39.856745 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 23:57:39.879818 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 23:57:39.906171 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 23:57:39.906593 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:57:39.906646 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 23:57:39.907554 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 23:57:39.911263 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 23:57:39.947883 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 23:57:39.950787 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 23:57:39.971908 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 23:57:39.976900 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 23:57:40.068355 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:57:40.068429 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:57:40.068443 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:57:40.090179 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:57:40.100844 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 23:57:40.104531 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:57:40.115748 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 23:57:40.128856 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 23:57:40.281399 ignition[709]: Ignition 2.19.0
Apr 13 23:57:40.281412 ignition[709]: Stage: fetch-offline
Apr 13 23:57:40.281453 ignition[709]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:57:40.281461 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:57:40.281692 ignition[709]: parsed url from cmdline: ""
Apr 13 23:57:40.281697 ignition[709]: no config URL provided
Apr 13 23:57:40.281702 ignition[709]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 23:57:40.281712 ignition[709]: no config at "/usr/lib/ignition/user.ign"
Apr 13 23:57:40.281739 ignition[709]: op(1): [started] loading QEMU firmware config module
Apr 13 23:57:40.281744 ignition[709]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 13 23:57:40.296779 ignition[709]: op(1): [finished] loading QEMU firmware config module
Apr 13 23:57:40.311582 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 23:57:40.331770 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 23:57:40.381925 systemd-networkd[791]: lo: Link UP
Apr 13 23:57:40.381975 systemd-networkd[791]: lo: Gained carrier
Apr 13 23:57:40.383856 systemd-networkd[791]: Enumeration completed
Apr 13 23:57:40.384889 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 23:57:40.386911 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:57:40.386914 systemd-networkd[791]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 23:57:40.389819 systemd-networkd[791]: eth0: Link UP
Apr 13 23:57:40.389823 systemd-networkd[791]: eth0: Gained carrier
Apr 13 23:57:40.389831 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 23:57:40.392243 systemd[1]: Reached target network.target - Network.
Apr 13 23:57:40.468086 systemd-networkd[791]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 13 23:57:40.544080 ignition[709]: parsing config with SHA512: fc6f23f3457706c449264927361e98f186d35f884e8890bf037baba4a0f30107440951c9a740003a6d16019a8dfda2e56617d4a34ba1ce9a41cf0e17af3ff647
Apr 13 23:57:40.562710 unknown[709]: fetched base config from "system"
Apr 13 23:57:40.562800 unknown[709]: fetched user config from "qemu"
Apr 13 23:57:40.573056 systemd-resolved[271]: Detected conflict on linux IN A 10.0.0.40
Apr 13 23:57:40.573768 ignition[709]: fetch-offline: fetch-offline passed
Apr 13 23:57:40.573073 systemd-resolved[271]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Apr 13 23:57:40.573885 ignition[709]: Ignition finished successfully
Apr 13 23:57:40.581592 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 23:57:40.611821 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 13 23:57:40.624014 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 23:57:40.650471 ignition[795]: Ignition 2.19.0
Apr 13 23:57:40.650491 ignition[795]: Stage: kargs
Apr 13 23:57:40.650620 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:57:40.650627 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:57:40.651376 ignition[795]: kargs: kargs passed
Apr 13 23:57:40.657539 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 23:57:40.651421 ignition[795]: Ignition finished successfully
Apr 13 23:57:40.676591 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 23:57:40.727593 ignition[803]: Ignition 2.19.0
Apr 13 23:57:40.727624 ignition[803]: Stage: disks
Apr 13 23:57:40.727814 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Apr 13 23:57:40.727824 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:57:40.729336 ignition[803]: disks: disks passed
Apr 13 23:57:40.734672 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 23:57:40.729407 ignition[803]: Ignition finished successfully
Apr 13 23:57:40.739528 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 23:57:40.743767 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 23:57:40.747635 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 23:57:40.747746 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 23:57:40.758133 systemd[1]: Reached target basic.target - Basic System.
Apr 13 23:57:40.774828 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 23:57:40.835198 systemd-fsck[814]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 23:57:40.850518 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 23:57:40.869342 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 23:57:41.344223 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 23:57:41.349580 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 23:57:41.356650 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 23:57:41.391253 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 23:57:41.448550 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 23:57:41.453181 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 23:57:41.453241 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 23:57:41.453275 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 23:57:41.505451 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (822)
Apr 13 23:57:41.505521 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:57:41.505537 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:57:41.505550 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:57:41.462191 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 23:57:41.511693 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 23:57:41.524401 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:57:41.527530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 23:57:41.567524 systemd-networkd[791]: eth0: Gained IPv6LL
Apr 13 23:57:41.684320 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 23:57:41.706465 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Apr 13 23:57:41.716535 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 23:57:41.727032 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 23:57:42.129704 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 23:57:42.161427 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 23:57:42.175230 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 23:57:42.191617 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 23:57:42.268603 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:57:42.338352 ignition[936]: INFO : Ignition 2.19.0
Apr 13 23:57:42.338352 ignition[936]: INFO : Stage: mount
Apr 13 23:57:42.338352 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:57:42.338352 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:57:42.355578 ignition[936]: INFO : mount: mount passed
Apr 13 23:57:42.355578 ignition[936]: INFO : Ignition finished successfully
Apr 13 23:57:42.361748 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 23:57:42.375571 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 23:57:42.382330 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 23:57:42.391789 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 23:57:42.487939 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (949)
Apr 13 23:57:42.497024 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 23:57:42.497089 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 23:57:42.497103 kernel: BTRFS info (device vda6): using free space tree
Apr 13 23:57:42.521373 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 23:57:42.528873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 23:57:42.580236 ignition[966]: INFO : Ignition 2.19.0
Apr 13 23:57:42.580236 ignition[966]: INFO : Stage: files
Apr 13 23:57:42.580236 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 23:57:42.580236 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 13 23:57:42.595682 ignition[966]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 23:57:42.595682 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 23:57:42.595682 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 23:57:42.605706 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 23:57:42.605706 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 23:57:42.618213 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 23:57:42.618213 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 23:57:42.618213 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 23:57:42.618213 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 23:57:42.618213 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 23:57:42.609101 unknown[966]: wrote ssh authorized keys file for user: core
Apr 13 23:57:42.890969 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 23:57:43.163208 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 23:57:43.163208 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 23:57:43.163208 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 23:57:43.163208 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 23:57:43.226860 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 23:57:43.226860 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 23:57:43.226860 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 23:57:43.226860 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 23:57:43.226860 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 23:57:43.226860 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 23:57:43.226860 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 23:57:43.226860 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:57:43.226860 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:57:43.226860 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:57:43.226860 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 13 23:57:43.365837 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 23:57:44.070489 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 23:57:44.085802 ignition[966]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 13 23:57:44.160499 ignition[966]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 23:57:44.178937 ignition[966]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 23:57:44.178937 ignition[966]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 13 23:57:44.178937 ignition[966]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 13 23:57:44.178937 ignition[966]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 23:57:44.178937 ignition[966]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 23:57:44.178937 ignition[966]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 13 23:57:44.178937 ignition[966]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 13 23:57:44.178937 ignition[966]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 23:57:44.178937 ignition[966]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 13 23:57:44.178937 ignition[966]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 13 23:57:44.178937 ignition[966]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Apr 13 23:57:44.252851 kernel: hrtimer: interrupt took 45092895 ns
Apr 13 23:57:44.362363 ignition[966]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 23:57:44.371215 ignition[966]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 13 23:57:44.376210 ignition[966]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 13 23:57:44.376210 ignition[966]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 23:57:44.376210 ignition[966]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 23:57:44.376210 ignition[966]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 23:57:44.376210 ignition[966]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 23:57:44.376210 ignition[966]: INFO : files: files passed
Apr 13 23:57:44.376210 ignition[966]: INFO : Ignition finished successfully
Apr 13 23:57:44.403005 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 23:57:44.424760 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 23:57:44.429790 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 23:57:44.434465 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 23:57:44.434583 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 23:57:44.454351 initrd-setup-root-after-ignition[994]: grep: /sysroot/oem/oem-release: No such file or directory Apr 13 23:57:44.457619 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 23:57:44.457619 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 23:57:44.467411 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 23:57:44.470942 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 23:57:44.477924 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 23:57:44.548597 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 23:57:44.599185 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 23:57:44.599523 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 23:57:44.604998 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 23:57:44.608432 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 23:57:44.614614 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 23:57:44.636790 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 23:57:44.679945 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 23:57:44.726644 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 23:57:44.748352 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Apr 13 23:57:44.750976 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 23:57:44.755581 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 23:57:44.763043 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 23:57:44.768614 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 23:57:44.775603 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 23:57:44.775756 systemd[1]: Stopped target basic.target - Basic System. Apr 13 23:57:44.788941 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 23:57:44.792242 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 23:57:44.799746 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 23:57:44.816240 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 23:57:44.816548 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 23:57:44.825449 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 23:57:44.828728 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 23:57:44.837624 systemd[1]: Stopped target swap.target - Swaps. Apr 13 23:57:44.852907 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 13 23:57:44.853070 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 23:57:44.866511 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 23:57:44.869411 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 23:57:44.879093 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 13 23:57:44.880373 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 23:57:44.897612 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Apr 13 23:57:44.897788 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 23:57:44.905120 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 23:57:44.905477 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 23:57:44.911062 systemd[1]: Stopped target paths.target - Path Units. Apr 13 23:57:44.930098 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 23:57:44.933115 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 23:57:44.940876 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 23:57:44.951940 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 23:57:44.972041 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 23:57:44.972178 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 23:57:44.979231 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 23:57:44.979364 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 23:57:44.987856 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 23:57:44.988045 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 23:57:44.988467 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 23:57:44.988755 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 23:57:45.064106 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 23:57:45.075534 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 23:57:45.081284 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 23:57:45.081549 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 23:57:45.090183 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Apr 13 23:57:45.093110 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 23:57:45.117216 ignition[1020]: INFO : Ignition 2.19.0 Apr 13 23:57:45.117216 ignition[1020]: INFO : Stage: umount Apr 13 23:57:45.117216 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 23:57:45.117216 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 13 23:57:45.117216 ignition[1020]: INFO : umount: umount passed Apr 13 23:57:45.117216 ignition[1020]: INFO : Ignition finished successfully Apr 13 23:57:45.139322 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 23:57:45.145951 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 23:57:45.146386 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 23:57:45.171426 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 23:57:45.171734 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 23:57:45.180021 systemd[1]: Stopped target network.target - Network. Apr 13 23:57:45.188433 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 23:57:45.188569 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 23:57:45.194830 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 23:57:45.194912 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 23:57:45.202833 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 23:57:45.202899 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 23:57:45.205588 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 23:57:45.205653 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 23:57:45.209849 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 23:57:45.216651 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Apr 13 23:57:45.238073 systemd-networkd[791]: eth0: DHCPv6 lease lost Apr 13 23:57:45.245026 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 23:57:45.245155 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 23:57:45.252697 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 23:57:45.252905 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 23:57:45.259703 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 23:57:45.259774 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 23:57:45.277449 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 23:57:45.293416 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 23:57:45.293576 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 23:57:45.337999 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 23:57:45.338094 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 23:57:45.338575 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 23:57:45.338642 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 23:57:45.358194 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 23:57:45.358352 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 23:57:45.366363 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 23:57:45.376850 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 23:57:45.377081 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 23:57:45.417570 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 23:57:45.417770 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 13 23:57:45.429841 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 23:57:45.429908 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 23:57:45.433388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 23:57:45.433439 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 23:57:45.433635 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 23:57:45.433688 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 23:57:45.462772 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 23:57:45.462860 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 23:57:45.465811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 23:57:45.465875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 23:57:45.477895 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 23:57:45.477992 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 23:57:45.561189 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 23:57:45.570213 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 23:57:45.570472 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 23:57:45.580355 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 13 23:57:45.580540 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 23:57:45.586325 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 23:57:45.586402 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 23:57:45.595348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 13 23:57:45.595435 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 23:57:45.595889 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 23:57:45.596064 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 23:57:45.619417 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 23:57:45.619601 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 23:57:45.629735 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 23:57:45.641535 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 23:57:45.653281 systemd[1]: Switching root. Apr 13 23:57:45.768642 systemd-journald[195]: Journal stopped Apr 13 23:57:48.492427 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Apr 13 23:57:48.492506 kernel: SELinux: policy capability network_peer_controls=1 Apr 13 23:57:48.492534 kernel: SELinux: policy capability open_perms=1 Apr 13 23:57:48.492548 kernel: SELinux: policy capability extended_socket_class=1 Apr 13 23:57:48.492561 kernel: SELinux: policy capability always_check_network=0 Apr 13 23:57:48.492577 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 13 23:57:48.492591 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 13 23:57:48.492603 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 13 23:57:48.492614 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 13 23:57:48.492629 kernel: audit: type=1403 audit(1776124666.215:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 13 23:57:48.492643 systemd[1]: Successfully loaded SELinux policy in 52.939ms. Apr 13 23:57:48.492668 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 59.127ms. 
Apr 13 23:57:48.492683 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 23:57:48.492696 systemd[1]: Detected virtualization kvm. Apr 13 23:57:48.492712 systemd[1]: Detected architecture x86-64. Apr 13 23:57:48.492725 systemd[1]: Detected first boot. Apr 13 23:57:48.492738 systemd[1]: Initializing machine ID from VM UUID. Apr 13 23:57:48.492751 zram_generator::config[1082]: No configuration found. Apr 13 23:57:48.492765 systemd[1]: Populated /etc with preset unit settings. Apr 13 23:57:48.492778 systemd[1]: Queued start job for default target multi-user.target. Apr 13 23:57:48.492791 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 13 23:57:48.492804 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 13 23:57:48.492820 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 13 23:57:48.492832 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 13 23:57:48.492844 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 13 23:57:48.492856 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 13 23:57:48.492869 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 13 23:57:48.492882 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 13 23:57:48.492894 systemd[1]: Created slice user.slice - User and Session Slice. Apr 13 23:57:48.492905 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 13 23:57:48.492919 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 23:57:48.492933 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 13 23:57:48.492945 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 13 23:57:48.492958 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 13 23:57:48.493031 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 23:57:48.493059 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 13 23:57:48.493072 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 23:57:48.493086 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 13 23:57:48.493098 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 23:57:48.493111 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 23:57:48.493369 systemd[1]: Reached target slices.target - Slice Units. Apr 13 23:57:48.493389 systemd[1]: Reached target swap.target - Swaps. Apr 13 23:57:48.493403 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 23:57:48.495883 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 13 23:57:48.495905 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 23:57:48.495919 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 23:57:48.495932 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 23:57:48.495945 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 23:57:48.495962 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 13 23:57:48.495998 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 23:57:48.496011 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 23:57:48.496026 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 23:57:48.496038 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 23:57:48.496051 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 23:57:48.496063 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 23:57:48.496076 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 23:57:48.496092 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 13 23:57:48.496105 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 23:57:48.496118 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 23:57:48.496130 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 23:57:48.496144 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 23:57:48.496156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 23:57:48.496170 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 23:57:48.496183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 23:57:48.496195 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 13 23:57:48.496209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 23:57:48.496222 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Apr 13 23:57:48.496235 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 13 23:57:48.496248 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 13 23:57:48.496261 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 23:57:48.496273 kernel: loop: module loaded Apr 13 23:57:48.496316 kernel: fuse: init (API version 7.39) Apr 13 23:57:48.496330 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 23:57:48.496343 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 13 23:57:48.496359 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 13 23:57:48.496372 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 23:57:48.496410 systemd-journald[1167]: Collecting audit messages is disabled. Apr 13 23:57:48.496436 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 23:57:48.496451 systemd-journald[1167]: Journal started Apr 13 23:57:48.496480 systemd-journald[1167]: Runtime Journal (/run/log/journal/a394912dfba54cd2b259ef30e9f5ac64) is 6.0M, max 48.3M, 42.2M free. Apr 13 23:57:48.502820 kernel: ACPI: bus type drm_connector registered Apr 13 23:57:48.508366 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 23:57:48.532705 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 13 23:57:48.539588 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 13 23:57:48.544853 systemd[1]: Mounted media.mount - External Media Directory. Apr 13 23:57:48.550951 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Apr 13 23:57:48.557755 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 13 23:57:48.567851 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 23:57:48.571496 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 23:57:48.576057 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 23:57:48.584563 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 13 23:57:48.584754 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 23:57:48.590744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 23:57:48.591464 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 23:57:48.614943 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 23:57:48.615481 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 23:57:48.621428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 23:57:48.622665 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 23:57:48.628580 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 23:57:48.628780 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 23:57:48.635950 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 23:57:48.636542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 23:57:48.640788 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 23:57:48.651908 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 13 23:57:48.665528 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 23:57:48.681466 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 13 23:57:48.770544 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 13 23:57:48.792655 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 23:57:48.807207 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 13 23:57:48.812715 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 23:57:48.818808 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 13 23:57:48.830506 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 13 23:57:48.833631 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 23:57:48.838605 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 23:57:48.841914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 23:57:48.846466 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 23:57:48.850822 systemd-journald[1167]: Time spent on flushing to /var/log/journal/a394912dfba54cd2b259ef30e9f5ac64 is 87.365ms for 987 entries. Apr 13 23:57:48.850822 systemd-journald[1167]: System Journal (/var/log/journal/a394912dfba54cd2b259ef30e9f5ac64) is 8.0M, max 195.6M, 187.6M free. Apr 13 23:57:48.964433 systemd-journald[1167]: Received client request to flush runtime journal. Apr 13 23:57:48.851397 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 23:57:48.885780 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 23:57:48.939214 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Apr 13 23:57:48.944031 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 23:57:48.948578 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 13 23:57:48.953852 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 23:57:48.964837 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 13 23:57:48.969786 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 13 23:57:48.973265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 23:57:48.994750 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Apr 13 23:57:48.994932 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Apr 13 23:57:49.011797 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 23:57:49.034676 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 23:57:49.093167 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 13 23:57:49.157089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 23:57:49.188590 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Apr 13 23:57:49.188658 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Apr 13 23:57:49.196751 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 23:57:50.085282 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 23:57:50.141559 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 23:57:50.182519 systemd-udevd[1247]: Using default interface naming scheme 'v255'. Apr 13 23:57:50.276882 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 13 23:57:50.350795 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 23:57:50.381512 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 23:57:50.474688 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 13 23:57:50.492350 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1258) Apr 13 23:57:50.538622 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 13 23:57:50.755725 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 13 23:57:50.778596 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 13 23:57:50.801137 kernel: ACPI: button: Power Button [PWRF] Apr 13 23:57:50.836679 systemd-networkd[1260]: lo: Link UP Apr 13 23:57:50.836685 systemd-networkd[1260]: lo: Gained carrier Apr 13 23:57:50.838362 systemd-networkd[1260]: Enumeration completed Apr 13 23:57:50.838543 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 23:57:50.839054 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 23:57:50.839068 systemd-networkd[1260]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 23:57:50.843959 systemd-networkd[1260]: eth0: Link UP Apr 13 23:57:50.844218 systemd-networkd[1260]: eth0: Gained carrier Apr 13 23:57:50.844240 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 23:57:50.856695 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Apr 13 23:57:50.867941 systemd-networkd[1260]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 13 23:57:50.935558 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 13 23:57:50.942948 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 13 23:57:50.943124 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 13 23:57:50.943144 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 13 23:57:50.943700 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 13 23:57:51.221279 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 23:57:51.227007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:57:51.233814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 23:57:51.234095 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:57:51.264122 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 23:57:51.740603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 23:57:52.002321 systemd-networkd[1260]: eth0: Gained IPv6LL
Apr 13 23:57:52.061696 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 23:57:52.319765 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 23:57:52.347864 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 23:57:52.408145 lvm[1298]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 23:57:52.447342 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 23:57:52.454880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 23:57:52.493215 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 23:57:52.513463 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 23:57:52.641596 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 23:57:52.646879 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 23:57:52.652865 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 23:57:52.653133 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 23:57:52.656474 systemd[1]: Reached target machines.target - Containers.
Apr 13 23:57:52.664277 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 23:57:52.688791 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 23:57:52.772219 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 23:57:52.775650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 23:57:52.782422 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 23:57:52.813875 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 23:57:52.860431 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 23:57:52.864362 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 23:57:52.875098 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 23:57:52.966636 kernel: loop0: detected capacity change from 0 to 142488
Apr 13 23:57:52.982634 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 23:57:52.985814 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 23:57:53.072776 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 23:57:53.193661 kernel: loop1: detected capacity change from 0 to 228704
Apr 13 23:57:53.400912 kernel: loop2: detected capacity change from 0 to 140768
Apr 13 23:57:53.590376 kernel: loop3: detected capacity change from 0 to 142488
Apr 13 23:57:53.874753 kernel: loop4: detected capacity change from 0 to 228704
Apr 13 23:57:54.003742 kernel: loop5: detected capacity change from 0 to 140768
Apr 13 23:57:54.159483 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 13 23:57:54.160762 (sd-merge)[1321]: Merged extensions into '/usr'.
Apr 13 23:57:54.172523 systemd[1]: Reloading requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 23:57:54.172716 systemd[1]: Reloading...
Apr 13 23:57:54.613352 zram_generator::config[1345]: No configuration found.
Apr 13 23:57:55.311106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 23:57:55.483772 ldconfig[1306]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 23:57:55.579869 systemd[1]: Reloading finished in 1406 ms.
Apr 13 23:57:55.652272 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 23:57:55.660070 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 23:57:55.781395 systemd[1]: Starting ensure-sysext.service...
Apr 13 23:57:55.786249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 23:57:55.863702 systemd[1]: Reloading requested from client PID 1392 ('systemctl') (unit ensure-sysext.service)...
Apr 13 23:57:55.863780 systemd[1]: Reloading...
Apr 13 23:57:55.966158 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 23:57:55.966846 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 23:57:55.968740 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 23:57:55.970142 systemd-tmpfiles[1393]: ACLs are not supported, ignoring.
Apr 13 23:57:55.971771 systemd-tmpfiles[1393]: ACLs are not supported, ignoring.
Apr 13 23:57:55.978573 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 23:57:55.978581 systemd-tmpfiles[1393]: Skipping /boot
Apr 13 23:57:56.051463 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 23:57:56.051475 systemd-tmpfiles[1393]: Skipping /boot
Apr 13 23:57:56.079355 zram_generator::config[1424]: No configuration found.
Apr 13 23:57:56.825841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 23:57:57.171792 systemd[1]: Reloading finished in 1283 ms.
Apr 13 23:57:57.264563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 23:57:57.326559 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 23:57:57.342259 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 23:57:57.350688 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 23:57:57.428959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 23:57:57.444496 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 23:57:57.472094 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:57:57.472910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 23:57:57.492912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 23:57:57.525733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 23:57:57.649643 augenrules[1492]: No rules
Apr 13 23:57:57.743232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 23:57:57.746157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 23:57:57.746664 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:57:57.752521 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 23:57:57.756946 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 23:57:57.769917 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 23:57:57.775257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 23:57:57.776618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 23:57:57.782696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 23:57:57.805661 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 23:57:57.815677 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 23:57:57.816664 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 23:57:57.858171 systemd-resolved[1470]: Positive Trust Anchors:
Apr 13 23:57:57.858189 systemd-resolved[1470]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 23:57:57.858222 systemd-resolved[1470]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 23:57:57.918572 systemd-resolved[1470]: Defaulting to hostname 'linux'.
Apr 13 23:57:57.932680 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 23:57:57.950594 systemd[1]: Reached target network.target - Network.
Apr 13 23:57:57.958787 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 23:57:57.984846 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 23:57:57.989006 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:57:57.991127 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 23:57:58.017111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 23:57:58.044143 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 23:57:58.060072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 23:57:58.062817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 23:57:58.080559 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 23:57:58.083977 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 23:57:58.086052 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:57:58.093395 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 23:57:58.136518 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 23:57:58.138638 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 23:57:58.143715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 23:57:58.144124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 23:57:58.149356 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 23:57:58.149544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 23:57:58.161168 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 23:57:58.292779 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:57:58.304846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 23:57:58.331893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 23:57:58.343374 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 23:57:58.383526 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 23:57:58.462786 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 23:57:58.468444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 23:57:58.469893 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 23:57:58.475275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 23:57:58.494821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 23:57:58.496953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 23:57:58.516123 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 23:57:58.516902 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 23:57:58.529909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 23:57:58.531158 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 23:57:58.538121 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 23:57:58.538531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 23:57:58.570501 systemd[1]: Finished ensure-sysext.service.
Apr 13 23:57:58.633809 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 23:57:58.636062 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 23:57:58.658642 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 23:57:59.062540 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 23:58:00.214941 systemd-resolved[1470]: Clock change detected. Flushing caches.
Apr 13 23:58:00.214967 systemd-timesyncd[1538]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 13 23:58:00.215003 systemd-timesyncd[1538]: Initial clock synchronization to Mon 2026-04-13 23:58:00.214804 UTC.
Apr 13 23:58:00.226805 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 23:58:00.241787 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 23:58:00.286722 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 23:58:00.292777 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 23:58:00.298454 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 23:58:00.301622 systemd[1]: Reached target paths.target - Path Units.
Apr 13 23:58:00.317892 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 23:58:00.324285 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 23:58:00.340607 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 23:58:00.350714 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 23:58:00.359799 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 23:58:00.406323 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 23:58:00.427757 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 23:58:00.452945 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 23:58:00.457026 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 23:58:00.459813 systemd[1]: Reached target basic.target - Basic System.
Apr 13 23:58:00.465783 systemd[1]: System is tainted: cgroupsv1
Apr 13 23:58:00.505077 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 23:58:00.506459 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 23:58:00.534036 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 23:58:00.597909 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 13 23:58:00.612751 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 23:58:00.620789 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 23:58:00.629077 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 23:58:00.633340 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 23:58:00.664465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:58:00.672514 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 23:58:00.677296 jq[1547]: false
Apr 13 23:58:00.689808 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found loop3
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found loop4
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found loop5
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found sr0
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found vda
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found vda1
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found vda2
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found vda3
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found usr
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found vda4
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found vda6
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found vda7
Apr 13 23:58:00.693669 extend-filesystems[1548]: Found vda9
Apr 13 23:58:00.693669 extend-filesystems[1548]: Checking size of /dev/vda9
Apr 13 23:58:00.718162 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 23:58:00.741810 extend-filesystems[1548]: Resized partition /dev/vda9
Apr 13 23:58:00.762665 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 23:58:00.772727 extend-filesystems[1566]: resize2fs 1.47.1 (20-May-2024)
Apr 13 23:58:00.781147 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 13 23:58:00.781868 dbus-daemon[1546]: [system] SELinux support is enabled
Apr 13 23:58:00.799422 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 23:58:00.832574 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 23:58:00.861859 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 23:58:00.862164 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1576)
Apr 13 23:58:00.883901 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 23:58:00.930872 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 23:58:00.938868 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 23:58:00.967810 jq[1590]: true
Apr 13 23:58:01.004424 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 13 23:58:01.016552 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 23:58:01.019433 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 23:58:01.025359 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 23:58:01.026539 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 23:58:01.031768 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 13 23:58:01.031768 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 13 23:58:01.031768 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 13 23:58:01.070043 extend-filesystems[1548]: Resized filesystem in /dev/vda9
Apr 13 23:58:01.077304 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 23:58:01.077578 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 23:58:01.083920 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 23:58:01.096767 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 23:58:01.099418 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 23:58:01.119045 update_engine[1588]: I20260413 23:58:01.118937 1588 main.cc:92] Flatcar Update Engine starting
Apr 13 23:58:01.185729 (ntainerd)[1602]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 23:58:01.211935 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 13 23:58:01.215970 jq[1601]: true
Apr 13 23:58:01.212813 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 13 23:58:01.226945 update_engine[1588]: I20260413 23:58:01.226744 1588 update_check_scheduler.cc:74] Next update check in 6m23s
Apr 13 23:58:01.270472 systemd-logind[1582]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 13 23:58:01.270493 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 23:58:01.274293 systemd-logind[1582]: New seat seat0.
Apr 13 23:58:01.291175 tar[1600]: linux-amd64/LICENSE
Apr 13 23:58:01.291175 tar[1600]: linux-amd64/helm
Apr 13 23:58:01.290961 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 23:58:01.332727 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 23:58:01.425451 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 23:58:01.426532 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 23:58:01.432702 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 23:58:01.438048 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 23:58:01.442710 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 23:58:01.454874 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 23:58:01.468804 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 23:58:01.596307 bash[1642]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 23:58:01.601129 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 23:58:01.708430 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 13 23:58:02.409928 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 23:58:02.428412 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 23:58:02.436881 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 23:58:02.596023 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 23:58:02.624469 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 23:58:02.784532 systemd[1]: Started sshd@0-10.0.0.40:22-10.0.0.1:34980.service - OpenSSH per-connection server daemon (10.0.0.1:34980).
Apr 13 23:58:02.792370 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 23:58:02.794174 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 23:58:02.828485 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 23:58:03.093068 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 23:58:03.382337 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 23:58:03.424219 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 23:58:03.438047 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 23:58:03.577674 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 34980 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ
Apr 13 23:58:03.585378 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 23:58:03.668529 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 23:58:03.836427 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 23:58:03.925629 systemd-logind[1582]: New session 1 of user core.
Apr 13 23:58:03.967920 containerd[1602]: time="2026-04-13T23:58:03.966390035Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 23:58:03.973225 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 23:58:04.025248 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 23:58:04.108196 (systemd)[1682]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 23:58:04.534347 containerd[1602]: time="2026-04-13T23:58:04.534274765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:58:04.587239 containerd[1602]: time="2026-04-13T23:58:04.585788416Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:58:04.587239 containerd[1602]: time="2026-04-13T23:58:04.586242029Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 23:58:04.588932 containerd[1602]: time="2026-04-13T23:58:04.586926755Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 23:58:04.595630 containerd[1602]: time="2026-04-13T23:58:04.594815115Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 23:58:04.600145 containerd[1602]: time="2026-04-13T23:58:04.596749592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 23:58:04.600145 containerd[1602]: time="2026-04-13T23:58:04.597822564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:58:04.601215 containerd[1602]: time="2026-04-13T23:58:04.600988887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:58:04.603213 containerd[1602]: time="2026-04-13T23:58:04.603080812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:58:04.603345 containerd[1602]: time="2026-04-13T23:58:04.603334248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 23:58:04.603421 containerd[1602]: time="2026-04-13T23:58:04.603408354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:58:04.603463 containerd[1602]: time="2026-04-13T23:58:04.603453173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 23:58:04.606318 containerd[1602]: time="2026-04-13T23:58:04.605152692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:58:04.610640 containerd[1602]: time="2026-04-13T23:58:04.610514970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 23:58:04.613637 containerd[1602]: time="2026-04-13T23:58:04.613535679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 23:58:04.617485 containerd[1602]: time="2026-04-13T23:58:04.615986338Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 23:58:04.621515 containerd[1602]: time="2026-04-13T23:58:04.620936140Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 23:58:04.623199 containerd[1602]: time="2026-04-13T23:58:04.623046833Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 23:58:04.643569 containerd[1602]: time="2026-04-13T23:58:04.643423056Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 23:58:04.643817 containerd[1602]: time="2026-04-13T23:58:04.643752388Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 23:58:04.643817 containerd[1602]: time="2026-04-13T23:58:04.643779671Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 23:58:04.643817 containerd[1602]: time="2026-04-13T23:58:04.643815608Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 23:58:04.643911 containerd[1602]: time="2026-04-13T23:58:04.643870413Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 23:58:04.660335 containerd[1602]: time="2026-04-13T23:58:04.647712115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 23:58:04.668624 containerd[1602]: time="2026-04-13T23:58:04.667712909Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 23:58:04.690212 containerd[1602]: time="2026-04-13T23:58:04.686499482Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 23:58:04.690212 containerd[1602]: time="2026-04-13T23:58:04.686810943Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 23:58:04.690212 containerd[1602]: time="2026-04-13T23:58:04.686831131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 23:58:04.690212 containerd[1602]: time="2026-04-13T23:58:04.687607612Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 23:58:04.690212 containerd[1602]: time="2026-04-13T23:58:04.689368342Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 23:58:04.772624 containerd[1602]: time="2026-04-13T23:58:04.772422714Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 23:58:04.773036 systemd[1682]: Queued start job for default target default.target.
Apr 13 23:58:04.773552 containerd[1602]: time="2026-04-13T23:58:04.773251145Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 23:58:04.773552 containerd[1602]: time="2026-04-13T23:58:04.773446666Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 23:58:04.773552 containerd[1602]: time="2026-04-13T23:58:04.773473575Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 23:58:04.773552 containerd[1602]: time="2026-04-13T23:58:04.773527987Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 23:58:04.775159 containerd[1602]: time="2026-04-13T23:58:04.774041859Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 23:58:04.783538 containerd[1602]: time="2026-04-13T23:58:04.777349756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 23:58:04.783538 containerd[1602]: time="2026-04-13T23:58:04.777705740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 23:58:04.783538 containerd[1602]: time="2026-04-13T23:58:04.777750749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 23:58:04.783538 containerd[1602]: time="2026-04-13T23:58:04.778544129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 23:58:04.783538 containerd[1602]: time="2026-04-13T23:58:04.779835337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 23:58:04.783538 containerd[1602]: time="2026-04-13T23:58:04.780911246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 23:58:04.783538 containerd[1602]: time="2026-04-13T23:58:04.782220005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 23:58:04.783538 containerd[1602]: time="2026-04-13T23:58:04.783367772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 23:58:04.783538 containerd[1602]: time="2026-04-13T23:58:04.783471455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 23:58:04.783538 containerd[1602]: time="2026-04-13T23:58:04.783526792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..."
type=io.containerd.grpc.v1 Apr 13 23:58:04.783538 containerd[1602]: time="2026-04-13T23:58:04.783542614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 23:58:04.784654 containerd[1602]: time="2026-04-13T23:58:04.783566499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 23:58:04.784654 containerd[1602]: time="2026-04-13T23:58:04.783608192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 23:58:04.785644 containerd[1602]: time="2026-04-13T23:58:04.785022692Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 23:58:04.787978 containerd[1602]: time="2026-04-13T23:58:04.787877046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 23:58:04.788247 containerd[1602]: time="2026-04-13T23:58:04.788062645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 23:58:04.788959 containerd[1602]: time="2026-04-13T23:58:04.788689372Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 23:58:04.792344 containerd[1602]: time="2026-04-13T23:58:04.792187179Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 23:58:04.792445 containerd[1602]: time="2026-04-13T23:58:04.792397754Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 23:58:04.792445 containerd[1602]: time="2026-04-13T23:58:04.792440253Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 23:58:04.794186 systemd[1682]: Created slice app.slice - User Application Slice. 
Apr 13 23:58:04.794355 containerd[1602]: time="2026-04-13T23:58:04.794243330Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 23:58:04.794355 containerd[1602]: time="2026-04-13T23:58:04.794313328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 23:58:04.794422 containerd[1602]: time="2026-04-13T23:58:04.794399074Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 23:58:04.794494 systemd[1682]: Reached target paths.target - Paths. Apr 13 23:58:04.794988 containerd[1602]: time="2026-04-13T23:58:04.794867362Z" level=info msg="NRI interface is disabled by configuration." Apr 13 23:58:04.795136 containerd[1602]: time="2026-04-13T23:58:04.795080901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 13 23:58:04.795232 systemd[1682]: Reached target timers.target - Timers. Apr 13 23:58:04.812330 systemd[1682]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Apr 13 23:58:04.813918 containerd[1602]: time="2026-04-13T23:58:04.812767261Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 23:58:04.835310 containerd[1602]: time="2026-04-13T23:58:04.828980685Z" level=info msg="Connect containerd service" Apr 13 23:58:04.835310 containerd[1602]: time="2026-04-13T23:58:04.829623927Z" level=info msg="using legacy CRI server" Apr 13 23:58:04.835310 containerd[1602]: time="2026-04-13T23:58:04.829641362Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 23:58:04.835472 containerd[1602]: time="2026-04-13T23:58:04.835350118Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 23:58:04.908455 systemd[1682]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 23:58:04.909616 systemd[1682]: Reached target sockets.target - Sockets. Apr 13 23:58:04.909636 systemd[1682]: Reached target basic.target - Basic System. Apr 13 23:58:04.912041 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 23:58:04.913323 systemd[1682]: Reached target default.target - Main User Target. Apr 13 23:58:04.913405 systemd[1682]: Startup finished in 762ms. 
Apr 13 23:58:04.922164 containerd[1602]: time="2026-04-13T23:58:04.917498870Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 23:58:04.928216 containerd[1602]: time="2026-04-13T23:58:04.923881411Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 23:58:04.928216 containerd[1602]: time="2026-04-13T23:58:04.925458823Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 23:58:04.936185 containerd[1602]: time="2026-04-13T23:58:04.927582805Z" level=info msg="Start subscribing containerd event" Apr 13 23:58:04.934660 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 23:58:04.957446 containerd[1602]: time="2026-04-13T23:58:04.957347337Z" level=info msg="Start recovering state" Apr 13 23:58:04.974759 containerd[1602]: time="2026-04-13T23:58:04.963729303Z" level=info msg="Start event monitor" Apr 13 23:58:05.005157 containerd[1602]: time="2026-04-13T23:58:05.003581044Z" level=info msg="Start snapshots syncer" Apr 13 23:58:05.009501 containerd[1602]: time="2026-04-13T23:58:05.007426288Z" level=info msg="Start cni network conf syncer for default" Apr 13 23:58:05.011787 containerd[1602]: time="2026-04-13T23:58:05.011604121Z" level=info msg="Start streaming server" Apr 13 23:58:05.019510 containerd[1602]: time="2026-04-13T23:58:05.019427200Z" level=info msg="containerd successfully booted in 1.068774s" Apr 13 23:58:05.019675 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 23:58:05.136474 systemd[1]: Started sshd@1-10.0.0.40:22-10.0.0.1:34990.service - OpenSSH per-connection server daemon (10.0.0.1:34990). 
Apr 13 23:58:05.337494 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 34990 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:58:05.354252 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:58:05.435300 systemd-logind[1582]: New session 2 of user core. Apr 13 23:58:05.469589 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 23:58:05.506155 tar[1600]: linux-amd64/README.md Apr 13 23:58:05.567059 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 23:58:05.755320 sshd[1697]: pam_unix(sshd:session): session closed for user core Apr 13 23:58:05.818726 systemd[1]: Started sshd@2-10.0.0.40:22-10.0.0.1:56384.service - OpenSSH per-connection server daemon (10.0.0.1:56384). Apr 13 23:58:05.836581 systemd[1]: sshd@1-10.0.0.40:22-10.0.0.1:34990.service: Deactivated successfully. Apr 13 23:58:05.880298 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 23:58:05.895773 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit. Apr 13 23:58:05.942537 systemd-logind[1582]: Removed session 2. Apr 13 23:58:06.049997 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 56384 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:58:06.060024 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:58:06.227134 systemd-logind[1582]: New session 3 of user core. Apr 13 23:58:06.269314 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 23:58:06.735703 sshd[1707]: pam_unix(sshd:session): session closed for user core Apr 13 23:58:06.768770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 23:58:06.774699 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:58:06.782141 systemd[1]: sshd@2-10.0.0.40:22-10.0.0.1:56384.service: Deactivated successfully. Apr 13 23:58:06.796988 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 23:58:06.802964 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit. Apr 13 23:58:06.805201 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 23:58:06.808817 systemd[1]: Startup finished in 13.277s (kernel) + 19.492s (userspace) = 32.770s. Apr 13 23:58:06.874901 systemd-logind[1582]: Removed session 3. Apr 13 23:58:12.620466 kubelet[1723]: E0413 23:58:12.613693 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:58:12.632940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:58:12.635931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:58:16.781511 systemd[1]: Started sshd@3-10.0.0.40:22-10.0.0.1:50004.service - OpenSSH per-connection server daemon (10.0.0.1:50004). Apr 13 23:58:17.041566 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 50004 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:58:17.078880 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:58:17.226399 systemd-logind[1582]: New session 4 of user core. Apr 13 23:58:17.249994 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 13 23:58:17.436361 sshd[1739]: pam_unix(sshd:session): session closed for user core Apr 13 23:58:17.484611 systemd[1]: Started sshd@4-10.0.0.40:22-10.0.0.1:50020.service - OpenSSH per-connection server daemon (10.0.0.1:50020). Apr 13 23:58:17.550582 systemd[1]: sshd@3-10.0.0.40:22-10.0.0.1:50004.service: Deactivated successfully. Apr 13 23:58:17.573691 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 23:58:17.631764 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit. Apr 13 23:58:17.687410 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 50020 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:58:17.693435 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:58:17.695660 systemd-logind[1582]: Removed session 4. Apr 13 23:58:17.808063 systemd-logind[1582]: New session 5 of user core. Apr 13 23:58:17.829304 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 23:58:17.956450 sshd[1744]: pam_unix(sshd:session): session closed for user core Apr 13 23:58:17.969667 systemd[1]: Started sshd@5-10.0.0.40:22-10.0.0.1:50030.service - OpenSSH per-connection server daemon (10.0.0.1:50030). Apr 13 23:58:17.972523 systemd[1]: sshd@4-10.0.0.40:22-10.0.0.1:50020.service: Deactivated successfully. Apr 13 23:58:17.991457 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 23:58:18.039761 systemd-logind[1582]: Session 5 logged out. Waiting for processes to exit. Apr 13 23:58:18.060617 systemd-logind[1582]: Removed session 5. Apr 13 23:58:18.102123 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 50030 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:58:18.107566 sshd[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:58:18.223484 systemd-logind[1582]: New session 6 of user core. Apr 13 23:58:18.263308 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 13 23:58:18.406570 sshd[1752]: pam_unix(sshd:session): session closed for user core Apr 13 23:58:18.460887 systemd[1]: sshd@5-10.0.0.40:22-10.0.0.1:50030.service: Deactivated successfully. Apr 13 23:58:18.479562 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 23:58:18.498395 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit. Apr 13 23:58:18.550728 systemd[1]: Started sshd@6-10.0.0.40:22-10.0.0.1:50040.service - OpenSSH per-connection server daemon (10.0.0.1:50040). Apr 13 23:58:18.592497 systemd-logind[1582]: Removed session 6. Apr 13 23:58:18.749566 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 50040 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:58:18.781826 sshd[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:58:18.883439 systemd-logind[1582]: New session 7 of user core. Apr 13 23:58:18.912694 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 23:58:19.132708 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 23:58:19.136419 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:58:19.241532 sudo[1767]: pam_unix(sudo:session): session closed for user root Apr 13 23:58:19.252885 sshd[1763]: pam_unix(sshd:session): session closed for user core Apr 13 23:58:19.292577 systemd[1]: Started sshd@7-10.0.0.40:22-10.0.0.1:50056.service - OpenSSH per-connection server daemon (10.0.0.1:50056). Apr 13 23:58:19.304639 systemd[1]: sshd@6-10.0.0.40:22-10.0.0.1:50040.service: Deactivated successfully. Apr 13 23:58:19.322709 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 23:58:19.337443 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit. Apr 13 23:58:19.401013 systemd-logind[1582]: Removed session 7. 
Apr 13 23:58:19.448549 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 50056 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:58:19.455694 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:58:19.588409 systemd-logind[1582]: New session 8 of user core. Apr 13 23:58:19.604170 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 23:58:19.710733 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 23:58:19.714309 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:58:19.774860 sudo[1777]: pam_unix(sudo:session): session closed for user root Apr 13 23:58:19.792684 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 23:58:19.796763 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:58:19.968433 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 23:58:19.970515 auditctl[1780]: No rules Apr 13 23:58:19.972978 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 23:58:19.974525 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 23:58:20.036825 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 23:58:20.148628 augenrules[1800]: No rules Apr 13 23:58:20.155851 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 23:58:20.169633 sudo[1776]: pam_unix(sudo:session): session closed for user root Apr 13 23:58:20.179546 sshd[1769]: pam_unix(sshd:session): session closed for user core Apr 13 23:58:20.214693 systemd[1]: Started sshd@8-10.0.0.40:22-10.0.0.1:50072.service - OpenSSH per-connection server daemon (10.0.0.1:50072). 
Apr 13 23:58:20.219517 systemd[1]: sshd@7-10.0.0.40:22-10.0.0.1:50056.service: Deactivated successfully. Apr 13 23:58:20.233641 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 23:58:20.267832 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit. Apr 13 23:58:20.283770 systemd-logind[1582]: Removed session 8. Apr 13 23:58:20.428110 sshd[1807]: Accepted publickey for core from 10.0.0.1 port 50072 ssh2: RSA SHA256:L16zK+ubCZNTurpOZzyaV2jyctPe8ubVYVI0iU3AHjQ Apr 13 23:58:20.432076 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 23:58:20.479742 systemd-logind[1582]: New session 9 of user core. Apr 13 23:58:20.500858 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 23:58:20.688580 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 23:58:20.688996 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 23:58:21.515901 (dockerd)[1831]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 23:58:21.520633 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 13 23:58:22.040639 dockerd[1831]: time="2026-04-13T23:58:22.040509152Z" level=info msg="Starting up" Apr 13 23:58:22.544522 dockerd[1831]: time="2026-04-13T23:58:22.541289677Z" level=info msg="Loading containers: start." Apr 13 23:58:22.912287 kernel: Initializing XFRM netlink socket Apr 13 23:58:22.918876 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 23:58:22.994348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:58:23.244274 systemd-networkd[1260]: docker0: Link UP Apr 13 23:58:23.343754 dockerd[1831]: time="2026-04-13T23:58:23.342825703Z" level=info msg="Loading containers: done." 
Apr 13 23:58:23.433654 dockerd[1831]: time="2026-04-13T23:58:23.432678330Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 23:58:23.433654 dockerd[1831]: time="2026-04-13T23:58:23.433163235Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 23:58:23.433654 dockerd[1831]: time="2026-04-13T23:58:23.433340418Z" level=info msg="Daemon has completed initialization" Apr 13 23:58:23.561260 dockerd[1831]: time="2026-04-13T23:58:23.559699086Z" level=info msg="API listen on /run/docker.sock" Apr 13 23:58:23.561344 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 23:58:23.939821 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:58:23.939966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:58:24.388183 kubelet[1989]: E0413 23:58:24.387806 1989 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:58:24.394565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:58:24.395469 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 23:58:25.689887 containerd[1602]: time="2026-04-13T23:58:25.689017671Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 13 23:58:28.881056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1895705029.mount: Deactivated successfully. 
Apr 13 23:58:34.485420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 23:58:34.506794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 23:58:35.007796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:58:35.008428 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:58:35.328403 kubelet[2069]: E0413 23:58:35.328231 2069 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:58:35.332158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:58:35.332586 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 23:58:36.385678 containerd[1602]: time="2026-04-13T23:58:36.382700929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:58:36.395840 containerd[1602]: time="2026-04-13T23:58:36.385597768Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857" Apr 13 23:58:36.502567 containerd[1602]: time="2026-04-13T23:58:36.501018664Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:58:36.696220 containerd[1602]: time="2026-04-13T23:58:36.694975274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:58:36.782206 containerd[1602]: time="2026-04-13T23:58:36.780963752Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 11.088416472s" Apr 13 23:58:36.782745 containerd[1602]: time="2026-04-13T23:58:36.782534196Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 13 23:58:36.790805 containerd[1602]: time="2026-04-13T23:58:36.790646658Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 13 23:58:42.193949 containerd[1602]: time="2026-04-13T23:58:42.193824221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:58:42.195628 containerd[1602]: time="2026-04-13T23:58:42.195579792Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841" Apr 13 23:58:42.294293 containerd[1602]: time="2026-04-13T23:58:42.290399378Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:58:42.587513 containerd[1602]: time="2026-04-13T23:58:42.587308444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 23:58:42.795562 containerd[1602]: time="2026-04-13T23:58:42.795385724Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 6.002710757s" Apr 13 23:58:42.795562 containerd[1602]: time="2026-04-13T23:58:42.795509797Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 13 23:58:42.804947 containerd[1602]: time="2026-04-13T23:58:42.804380635Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 13 23:58:45.497509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 23:58:45.520549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 23:58:46.073696 update_engine[1588]: I20260413 23:58:46.064356 1588 update_attempter.cc:509] Updating boot flags... Apr 13 23:58:46.317158 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2100) Apr 13 23:58:46.733132 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2100) Apr 13 23:58:46.795860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 23:58:46.826084 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 23:58:49.761538 kubelet[2114]: E0413 23:58:49.759333 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 23:58:49.770621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 23:58:49.773401 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 23:58:50.298531 containerd[1602]: time="2026-04-13T23:58:50.296793750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:58:50.300321 containerd[1602]: time="2026-04-13T23:58:50.299406052Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685"
Apr 13 23:58:50.398706 containerd[1602]: time="2026-04-13T23:58:50.398454079Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:58:50.472789 containerd[1602]: time="2026-04-13T23:58:50.472639359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:58:50.534168 containerd[1602]: time="2026-04-13T23:58:50.532032549Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 7.727548411s"
Apr 13 23:58:50.534860 containerd[1602]: time="2026-04-13T23:58:50.534536972Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\""
Apr 13 23:58:50.594132 containerd[1602]: time="2026-04-13T23:58:50.593652886Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\""
Apr 13 23:58:59.980682 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 13 23:59:00.014322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:59:00.646916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:59:00.678079 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:59:01.396081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923791983.mount: Deactivated successfully.
Apr 13 23:59:02.342485 kubelet[2140]: E0413 23:59:02.342213 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:59:02.372231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:59:02.373901 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:59:05.875306 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1750390830 wd_nsec: 1750390955
Apr 13 23:59:07.317271 containerd[1602]: time="2026-04-13T23:59:07.314673148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:07.317271 containerd[1602]: time="2026-04-13T23:59:07.315143540Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657"
Apr 13 23:59:07.338179 containerd[1602]: time="2026-04-13T23:59:07.337927586Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:07.416168 containerd[1602]: time="2026-04-13T23:59:07.413728182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:07.481770 containerd[1602]: time="2026-04-13T23:59:07.481069339Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 16.884935484s"
Apr 13 23:59:07.483355 containerd[1602]: time="2026-04-13T23:59:07.482179807Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\""
Apr 13 23:59:07.493843 containerd[1602]: time="2026-04-13T23:59:07.493726717Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 13 23:59:08.839780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342514283.mount: Deactivated successfully.
Apr 13 23:59:12.493284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 13 23:59:12.550024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:59:14.227246 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:59:14.229585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:59:15.284759 kubelet[2217]: E0413 23:59:15.283333 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:59:15.289476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:59:15.290582 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:59:16.719030 containerd[1602]: time="2026-04-13T23:59:16.717625918Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714"
Apr 13 23:59:16.719030 containerd[1602]: time="2026-04-13T23:59:16.720766083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:16.822442 containerd[1602]: time="2026-04-13T23:59:16.821793126Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:17.092214 containerd[1602]: time="2026-04-13T23:59:17.091336333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:17.138043 containerd[1602]: time="2026-04-13T23:59:17.135856442Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 9.641871526s"
Apr 13 23:59:17.138043 containerd[1602]: time="2026-04-13T23:59:17.136363810Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 13 23:59:17.211750 containerd[1602]: time="2026-04-13T23:59:17.211344988Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 13 23:59:19.399666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1173885092.mount: Deactivated successfully.
Apr 13 23:59:19.521082 containerd[1602]: time="2026-04-13T23:59:19.513212000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:19.530224 containerd[1602]: time="2026-04-13T23:59:19.529649470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070"
Apr 13 23:59:19.681017 containerd[1602]: time="2026-04-13T23:59:19.679436181Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:19.873798 containerd[1602]: time="2026-04-13T23:59:19.871557348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:19.897742 containerd[1602]: time="2026-04-13T23:59:19.895227449Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.682424126s"
Apr 13 23:59:19.897742 containerd[1602]: time="2026-04-13T23:59:19.896026641Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 13 23:59:19.916932 containerd[1602]: time="2026-04-13T23:59:19.915911681Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 13 23:59:22.264056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount109243139.mount: Deactivated successfully.
Apr 13 23:59:25.459969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 13 23:59:25.488923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:59:26.166833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:59:26.169848 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:59:27.227125 kubelet[2256]: E0413 23:59:27.224946 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:59:27.245233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:59:27.245684 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:59:35.960357 containerd[1602]: time="2026-04-13T23:59:35.957990133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:35.966822 containerd[1602]: time="2026-04-13T23:59:35.966340936Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278"
Apr 13 23:59:36.111768 containerd[1602]: time="2026-04-13T23:59:36.108936460Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:36.614903 containerd[1602]: time="2026-04-13T23:59:36.613922039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 23:59:36.694143 containerd[1602]: time="2026-04-13T23:59:36.693047866Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 16.77545748s"
Apr 13 23:59:36.694747 containerd[1602]: time="2026-04-13T23:59:36.693910533Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 13 23:59:37.492278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 13 23:59:37.519959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:59:38.158994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:59:38.161386 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:59:39.586000 kubelet[2335]: E0413 23:59:39.584657 2335 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:59:39.597301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:59:39.599782 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:59:49.739798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 13 23:59:49.784574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:59:50.525179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:59:50.538695 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 23:59:51.710658 kubelet[2374]: E0413 23:59:51.709401 2374 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 23:59:51.718973 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 23:59:51.719772 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 23:59:52.345025 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:59:52.380651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:59:52.662863 systemd[1]: Reloading requested from client PID 2393 ('systemctl') (unit session-9.scope)...
Apr 13 23:59:52.662904 systemd[1]: Reloading...
Apr 13 23:59:53.718560 zram_generator::config[2436]: No configuration found.
Apr 13 23:59:54.701635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 23:59:55.712998 systemd[1]: Reloading finished in 3049 ms.
Apr 13 23:59:56.043982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:59:56.068715 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 23:59:56.144754 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:59:56.203525 systemd[1]: kubelet.service: Deactivated successfully.
Apr 13 23:59:56.207496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:59:56.260777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 23:59:57.321440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 23:59:57.323548 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 00:00:00.037869 kubelet[2499]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 00:00:00.039406 kubelet[2499]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 14 00:00:00.039406 kubelet[2499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 00:00:00.059981 kubelet[2499]: I0414 00:00:00.044345 2499 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 14 00:00:05.035140 kubelet[2499]: I0414 00:00:05.033055 2499 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 14 00:00:05.036931 kubelet[2499]: I0414 00:00:05.036669 2499 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 00:00:05.072378 kubelet[2499]: I0414 00:00:05.070801 2499 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 14 00:00:05.422634 kubelet[2499]: E0414 00:00:05.421761 2499 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:00:05.522306 kubelet[2499]: I0414 00:00:05.521047 2499 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 00:00:05.906733 kubelet[2499]: E0414 00:00:05.905500 2499 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 00:00:05.916894 kubelet[2499]: I0414 00:00:05.914351 2499 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 14 00:00:06.461520 kubelet[2499]: I0414 00:00:06.460875 2499 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 14 00:00:06.474168 kubelet[2499]: I0414 00:00:06.468834 2499 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 00:00:06.491559 kubelet[2499]: I0414 00:00:06.474615 2499 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 14 00:00:06.496505 kubelet[2499]: I0414 00:00:06.495443 2499 topology_manager.go:138] "Creating topology manager with none policy"
Apr 14 00:00:06.504637 kubelet[2499]: I0414 00:00:06.502999 2499 container_manager_linux.go:303] "Creating device plugin manager"
Apr 14 00:00:06.524625 kubelet[2499]: I0414 00:00:06.522604 2499 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 00:00:06.571516 kubelet[2499]: I0414 00:00:06.570795 2499 kubelet.go:480] "Attempting to sync node with API server"
Apr 14 00:00:06.581241 kubelet[2499]: I0414 00:00:06.579505 2499 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 00:00:06.585197 kubelet[2499]: I0414 00:00:06.584993 2499 kubelet.go:386] "Adding apiserver pod source"
Apr 14 00:00:06.585905 kubelet[2499]: I0414 00:00:06.585325 2499 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 00:00:06.720641 kubelet[2499]: E0414 00:00:06.719069 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 00:00:06.720641 kubelet[2499]: E0414 00:00:06.719234 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 00:00:06.755128 kubelet[2499]: I0414 00:00:06.754737 2499 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 00:00:06.772256 kubelet[2499]: I0414 00:00:06.771891 2499 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 00:00:06.779762 kubelet[2499]: W0414 00:00:06.778189 2499 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 14 00:00:06.982873 kubelet[2499]: I0414 00:00:06.981700 2499 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 14 00:00:06.988849 kubelet[2499]: I0414 00:00:06.985528 2499 server.go:1289] "Started kubelet"
Apr 14 00:00:06.988849 kubelet[2499]: I0414 00:00:06.987980 2499 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 00:00:06.998055 kubelet[2499]: I0414 00:00:06.990925 2499 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 00:00:07.026283 kubelet[2499]: I0414 00:00:07.001601 2499 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 00:00:07.017689 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
Apr 14 00:00:07.030421 systemd[1]: logrotate.service: Deactivated successfully.
Apr 14 00:00:07.088960 kubelet[2499]: I0414 00:00:07.086321 2499 server.go:317] "Adding debug handlers to kubelet server"
Apr 14 00:00:07.106085 kubelet[2499]: E0414 00:00:07.100671 2499 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.40:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.40:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6101a0500735a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:00:06.983349082 +0000 UTC m=+9.618545353,LastTimestamp:2026-04-14 00:00:06.983349082 +0000 UTC m=+9.618545353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:00:07.115594 kubelet[2499]: I0414 00:00:07.112749 2499 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 00:00:07.116889 kubelet[2499]: I0414 00:00:07.115733 2499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 14 00:00:07.150032 kubelet[2499]: I0414 00:00:07.149748 2499 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 14 00:00:07.155036 kubelet[2499]: E0414 00:00:07.154956 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:07.181306 kubelet[2499]: I0414 00:00:07.178041 2499 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 14 00:00:07.217153 kubelet[2499]: E0414 00:00:07.200222 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="200ms"
Apr 14 00:00:07.222167 kubelet[2499]: I0414 00:00:07.219163 2499 reconciler.go:26] "Reconciler: start to sync state"
Apr 14 00:00:07.232142 kubelet[2499]: I0414 00:00:07.226882 2499 factory.go:223] Registration of the systemd container factory successfully
Apr 14 00:00:07.240393 kubelet[2499]: E0414 00:00:07.238474 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 00:00:07.241875 kubelet[2499]: I0414 00:00:07.234803 2499 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 00:00:07.261380 kubelet[2499]: E0414 00:00:07.261247 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:07.312530 kubelet[2499]: E0414 00:00:07.309645 2499 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 00:00:07.340389 kubelet[2499]: I0414 00:00:07.340082 2499 factory.go:223] Registration of the containerd container factory successfully
Apr 14 00:00:07.387551 kubelet[2499]: E0414 00:00:07.387207 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:07.396789 kubelet[2499]: I0414 00:00:07.395363 2499 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 14 00:00:07.411202 kubelet[2499]: I0414 00:00:07.408179 2499 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 14 00:00:07.411202 kubelet[2499]: I0414 00:00:07.409133 2499 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 14 00:00:07.411202 kubelet[2499]: I0414 00:00:07.409432 2499 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 00:00:07.411202 kubelet[2499]: I0414 00:00:07.409502 2499 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 14 00:00:07.411202 kubelet[2499]: E0414 00:00:07.409684 2499 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 00:00:07.506247 kubelet[2499]: E0414 00:00:07.500760 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="400ms"
Apr 14 00:00:07.512016 kubelet[2499]: E0414 00:00:07.509935 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:07.512016 kubelet[2499]: E0414 00:00:07.510239 2499 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 00:00:07.520137 kubelet[2499]: E0414 00:00:07.519588 2499 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:00:07.526856 kubelet[2499]: E0414 00:00:07.525808 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 00:00:07.585510 kubelet[2499]: E0414 00:00:07.584061 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 00:00:07.615064 kubelet[2499]: E0414 00:00:07.612978 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:07.715408 kubelet[2499]: E0414 00:00:07.713543 2499 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:00:07.719945 kubelet[2499]: E0414 00:00:07.718069 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:07.748736 kubelet[2499]: E0414 00:00:07.748082 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 00:00:07.823466 kubelet[2499]: E0414 00:00:07.822297 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:07.914728 kubelet[2499]: E0414 00:00:07.914507 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="800ms"
Apr 14 00:00:07.927449 kubelet[2499]: E0414 00:00:07.924796 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:08.038415 kubelet[2499]: E0414 00:00:08.036762 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:08.119789 kubelet[2499]: E0414 00:00:08.119177 2499 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:00:08.131999 kubelet[2499]: E0414 00:00:08.131489 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 00:00:08.142208 kubelet[2499]: E0414 00:00:08.141264 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:08.180841 kubelet[2499]: I0414 00:00:08.180272 2499 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 14 00:00:08.182717 kubelet[2499]: I0414 00:00:08.181888 2499 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 14 00:00:08.184554 kubelet[2499]: I0414 00:00:08.184022 2499 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 00:00:08.212415 kubelet[2499]: I0414 00:00:08.208861 2499 policy_none.go:49] "None policy: Start"
Apr 14 00:00:08.214790 kubelet[2499]: I0414 00:00:08.213951 2499 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 14 00:00:08.220289 kubelet[2499]: I0414 00:00:08.217073 2499 state_mem.go:35] "Initializing new in-memory state store"
Apr 14 00:00:08.290254 kubelet[2499]: E0414 00:00:08.289723 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:08.397424 kubelet[2499]: E0414 00:00:08.395606 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 00:00:08.414127 kubelet[2499]: E0414 00:00:08.410444 2499 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 00:00:08.414127 kubelet[2499]: I0414 00:00:08.413791 2499 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 14 00:00:08.415026 kubelet[2499]: I0414 00:00:08.414173 2499 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 00:00:08.430050 kubelet[2499]: I0414 00:00:08.429988 2499 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 14 00:00:08.430050 kubelet[2499]: E0414 00:00:08.430084 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 00:00:08.570260 kubelet[2499]: E0414 00:00:08.570076 2499 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 00:00:08.571480 kubelet[2499]: E0414 00:00:08.570510 2499 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:00:08.740431 kubelet[2499]: E0414 00:00:08.740245 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="1.6s"
Apr 14 00:00:08.748495 kubelet[2499]: I0414 00:00:08.745419 2499 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:00:08.780044 kubelet[2499]: E0414 00:00:08.778813 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost"
Apr 14 00:00:09.030808 kubelet[2499]: I0414 00:00:09.029476 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97d6824ebc055ec4160426c562e53a82-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"97d6824ebc055ec4160426c562e53a82\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:00:09.038813 kubelet[2499]: I0414 00:00:09.034830 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97d6824ebc055ec4160426c562e53a82-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"97d6824ebc055ec4160426c562e53a82\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:00:09.040629 kubelet[2499]: I0414 00:00:09.040557 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName:
\"kubernetes.io/host-path/97d6824ebc055ec4160426c562e53a82-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"97d6824ebc055ec4160426c562e53a82\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:00:09.040759 kubelet[2499]: I0414 00:00:09.040514 2499 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:00:09.103827 kubelet[2499]: E0414 00:00:09.103681 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Apr 14 00:00:09.286495 kubelet[2499]: I0414 00:00:09.281516 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:00:09.309491 kubelet[2499]: I0414 00:00:09.306053 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:00:09.356835 kubelet[2499]: I0414 00:00:09.343323 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:00:09.367126 kubelet[2499]: I0414 00:00:09.364693 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:00:09.376060 kubelet[2499]: I0414 00:00:09.371868 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:00:09.419175 kubelet[2499]: E0414 00:00:09.412326 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:00:09.457165 kubelet[2499]: E0414 00:00:09.455948 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:09.483639 kubelet[2499]: I0414 00:00:09.483573 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 00:00:09.491501 containerd[1602]: time="2026-04-14T00:00:09.491228413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:97d6824ebc055ec4160426c562e53a82,Namespace:kube-system,Attempt:0,}" Apr 14 00:00:09.569487 kubelet[2499]: E0414 00:00:09.569278 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:00:09.585174 kubelet[2499]: E0414 00:00:09.583388 2499 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:09.596176 kubelet[2499]: I0414 00:00:09.595283 2499 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:00:09.598507 kubelet[2499]: E0414 00:00:09.598289 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Apr 14 00:00:09.599168 kubelet[2499]: E0414 00:00:09.599149 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:00:09.603068 containerd[1602]: time="2026-04-14T00:00:09.602755281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}" Apr 14 00:00:09.604472 kubelet[2499]: E0414 00:00:09.604149 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:09.615862 containerd[1602]: time="2026-04-14T00:00:09.614278252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}" Apr 14 00:00:10.200203 kubelet[2499]: E0414 00:00:10.199951 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 00:00:10.375340 kubelet[2499]: E0414 00:00:10.373863 2499 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="3.2s" Apr 14 00:00:10.375340 kubelet[2499]: E0414 00:00:10.374800 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 00:00:10.485351 kubelet[2499]: I0414 00:00:10.484398 2499 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:00:10.495310 kubelet[2499]: E0414 00:00:10.494672 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Apr 14 00:00:10.583992 kubelet[2499]: E0414 00:00:10.581553 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 00:00:11.174546 kubelet[2499]: E0414 00:00:11.172345 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 00:00:11.492669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154041073.mount: Deactivated successfully. 
Apr 14 00:00:11.684147 containerd[1602]: time="2026-04-14T00:00:11.681727125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 00:00:11.690200 containerd[1602]: time="2026-04-14T00:00:11.688599744Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 14 00:00:11.713160 containerd[1602]: time="2026-04-14T00:00:11.712256565Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 00:00:11.713160 containerd[1602]: time="2026-04-14T00:00:11.712489453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 00:00:11.713854 kubelet[2499]: E0414 00:00:11.713714 2499 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 00:00:11.740447 containerd[1602]: time="2026-04-14T00:00:11.734957234Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 00:00:11.997224 containerd[1602]: time="2026-04-14T00:00:11.996894824Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 00:00:12.224325 kubelet[2499]: I0414 00:00:12.216819 2499 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:00:12.326278 kubelet[2499]: E0414 00:00:12.321251 2499 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Apr 14 00:00:12.329191 containerd[1602]: time="2026-04-14T00:00:12.328752997Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 00:00:12.408414 containerd[1602]: time="2026-04-14T00:00:12.408208452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.750961248s" Apr 14 00:00:12.429940 containerd[1602]: time="2026-04-14T00:00:12.427546038Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.824009518s" Apr 14 00:00:12.429940 containerd[1602]: time="2026-04-14T00:00:12.428914451Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.937101659s" Apr 14 00:00:12.539904 containerd[1602]: time="2026-04-14T00:00:12.532766695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 00:00:13.500667 containerd[1602]: time="2026-04-14T00:00:13.488012597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:00:13.500667 containerd[1602]: time="2026-04-14T00:00:13.489566920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:00:13.500667 containerd[1602]: time="2026-04-14T00:00:13.489589210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:00:13.500667 containerd[1602]: time="2026-04-14T00:00:13.492328599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:00:13.516083 containerd[1602]: time="2026-04-14T00:00:13.515486345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:00:13.525117 containerd[1602]: time="2026-04-14T00:00:13.516050173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:00:13.525117 containerd[1602]: time="2026-04-14T00:00:13.516079545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:00:13.530002 containerd[1602]: time="2026-04-14T00:00:13.524959110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:00:13.530002 containerd[1602]: time="2026-04-14T00:00:13.529306287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:00:13.530002 containerd[1602]: time="2026-04-14T00:00:13.529392385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:00:13.530002 containerd[1602]: time="2026-04-14T00:00:13.529409279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:00:13.530002 containerd[1602]: time="2026-04-14T00:00:13.529613330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:00:13.684980 kubelet[2499]: E0414 00:00:13.682871 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="6.4s" Apr 14 00:00:13.865349 kubelet[2499]: E0414 00:00:13.853636 2499 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.40:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.40:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6101a0500735a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:00:06.983349082 +0000 UTC m=+9.618545353,LastTimestamp:2026-04-14 00:00:06.983349082 +0000 UTC m=+9.618545353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 00:00:14.005212 kubelet[2499]: E0414 00:00:14.004610 2499 reflector.go:200] "Failed to watch" err="failed to list 
*v1.Node: Get \"https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 00:00:14.023754 containerd[1602]: time="2026-04-14T00:00:14.023535949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id \"1413a5fc53856a5e5a937115cf63ccb944846f960d1542b641301c694adf78ed\"" Apr 14 00:00:14.027244 containerd[1602]: time="2026-04-14T00:00:14.026942100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:97d6824ebc055ec4160426c562e53a82,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4222090966e77afb5d24b2431fa4b08b07f97634bae7c7f14b65f3249ad4254\"" Apr 14 00:00:14.027244 containerd[1602]: time="2026-04-14T00:00:14.027203437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a744d963bee931f372cb869fd7b0af8c2032a2cb274d27cf5f4a4b4690574f49\"" Apr 14 00:00:14.077985 kubelet[2499]: E0414 00:00:14.077536 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:14.081557 kubelet[2499]: E0414 00:00:14.077447 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:14.086977 kubelet[2499]: E0414 00:00:14.083523 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:14.320938 
containerd[1602]: time="2026-04-14T00:00:14.318980323Z" level=info msg="CreateContainer within sandbox \"e4222090966e77afb5d24b2431fa4b08b07f97634bae7c7f14b65f3249ad4254\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 14 00:00:14.320938 containerd[1602]: time="2026-04-14T00:00:14.320152621Z" level=info msg="CreateContainer within sandbox \"a744d963bee931f372cb869fd7b0af8c2032a2cb274d27cf5f4a4b4690574f49\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 14 00:00:14.320938 containerd[1602]: time="2026-04-14T00:00:14.320626955Z" level=info msg="CreateContainer within sandbox \"1413a5fc53856a5e5a937115cf63ccb944846f960d1542b641301c694adf78ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 14 00:00:14.490714 kubelet[2499]: E0414 00:00:14.490612 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 00:00:14.589243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413356163.mount: Deactivated successfully. Apr 14 00:00:14.674541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556839073.mount: Deactivated successfully. 
Apr 14 00:00:14.813306 containerd[1602]: time="2026-04-14T00:00:14.812947161Z" level=info msg="CreateContainer within sandbox \"a744d963bee931f372cb869fd7b0af8c2032a2cb274d27cf5f4a4b4690574f49\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2ce48a6a211ccdeb476711a4c7177a6c785bb8406f9ac9bb66211195fb2f3f71\"" Apr 14 00:00:14.849818 containerd[1602]: time="2026-04-14T00:00:14.849400727Z" level=info msg="CreateContainer within sandbox \"e4222090966e77afb5d24b2431fa4b08b07f97634bae7c7f14b65f3249ad4254\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5176d00bcd4974a4ebe7b3ee5b4f00aec11da8bb4db41b5f2f62efcadc5195f5\"" Apr 14 00:00:14.873314 containerd[1602]: time="2026-04-14T00:00:14.871606996Z" level=info msg="CreateContainer within sandbox \"1413a5fc53856a5e5a937115cf63ccb944846f960d1542b641301c694adf78ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a4673660d7e984778fdbeccf1e48aa4e6cb4fda536c9a8ff565ca28cf107a738\"" Apr 14 00:00:14.883940 containerd[1602]: time="2026-04-14T00:00:14.883801321Z" level=info msg="StartContainer for \"5176d00bcd4974a4ebe7b3ee5b4f00aec11da8bb4db41b5f2f62efcadc5195f5\"" Apr 14 00:00:14.886509 containerd[1602]: time="2026-04-14T00:00:14.883837421Z" level=info msg="StartContainer for \"2ce48a6a211ccdeb476711a4c7177a6c785bb8406f9ac9bb66211195fb2f3f71\"" Apr 14 00:00:14.928876 containerd[1602]: time="2026-04-14T00:00:14.928692592Z" level=info msg="StartContainer for \"a4673660d7e984778fdbeccf1e48aa4e6cb4fda536c9a8ff565ca28cf107a738\"" Apr 14 00:00:14.992046 kubelet[2499]: E0414 00:00:14.990877 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 00:00:15.817361 
kubelet[2499]: I0414 00:00:15.816635 2499 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:00:15.926028 containerd[1602]: time="2026-04-14T00:00:15.904564780Z" level=info msg="StartContainer for \"5176d00bcd4974a4ebe7b3ee5b4f00aec11da8bb4db41b5f2f62efcadc5195f5\" returns successfully" Apr 14 00:00:16.124186 containerd[1602]: time="2026-04-14T00:00:16.122519497Z" level=info msg="StartContainer for \"a4673660d7e984778fdbeccf1e48aa4e6cb4fda536c9a8ff565ca28cf107a738\" returns successfully" Apr 14 00:00:16.501716 containerd[1602]: time="2026-04-14T00:00:16.428816046Z" level=info msg="StartContainer for \"2ce48a6a211ccdeb476711a4c7177a6c785bb8406f9ac9bb66211195fb2f3f71\" returns successfully" Apr 14 00:00:18.790123 kubelet[2499]: E0414 00:00:18.784602 2499 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 00:00:19.272755 kubelet[2499]: E0414 00:00:19.266443 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:00:19.372067 kubelet[2499]: E0414 00:00:19.365833 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:19.719140 kubelet[2499]: E0414 00:00:19.713478 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:00:19.781893 kubelet[2499]: E0414 00:00:19.742894 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:20.898384 kubelet[2499]: E0414 00:00:20.897357 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Apr 14 00:00:20.900032 kubelet[2499]: E0414 00:00:20.899970 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:00:20.902169 kubelet[2499]: E0414 00:00:20.900910 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:20.902169 kubelet[2499]: E0414 00:00:20.901207 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:21.024907 kubelet[2499]: E0414 00:00:21.023236 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:00:21.036265 kubelet[2499]: E0414 00:00:21.030261 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:22.196833 kubelet[2499]: E0414 00:00:22.196644 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:00:22.203177 kubelet[2499]: E0414 00:00:22.199998 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:00:22.219260 kubelet[2499]: E0414 00:00:22.216889 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:00:22.219260 kubelet[2499]: E0414 00:00:22.218253 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:00:22.221879 kubelet[2499]: E0414 00:00:22.221437 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:22.225838 kubelet[2499]: E0414 00:00:22.224860 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:25.987165 kubelet[2499]: E0414 00:00:25.948396 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 14 00:00:26.836656 kubelet[2499]: E0414 00:00:26.836344 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 00:00:28.226174 kubelet[2499]: E0414 00:00:28.223787 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:00:28.305917 kubelet[2499]: E0414 00:00:28.304605 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:28.853525 kubelet[2499]: E0414 00:00:28.852974 2499 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:00:29.792751 kubelet[2499]: E0414 00:00:29.790658 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:00:29.809933 kubelet[2499]: E0414 00:00:29.805797 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:30.148072 kubelet[2499]: E0414 00:00:30.144937 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 14 00:00:30.576419 kubelet[2499]: E0414 00:00:30.574650 2499 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 00:00:31.222182 kubelet[2499]: E0414 00:00:31.220999 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 00:00:32.458246 kubelet[2499]: I0414 00:00:32.457835 2499 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:00:33.973835 kubelet[2499]: E0414 00:00:33.971997 2499 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.40:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6101a0500735a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:00:06.983349082 +0000 UTC m=+9.618545353,LastTimestamp:2026-04-14 00:00:06.983349082 +0000 UTC m=+9.618545353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:00:34.691650 kubelet[2499]: E0414 00:00:34.691162 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 00:00:37.103467 kubelet[2499]: E0414 00:00:37.100485 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 00:00:38.871222 kubelet[2499]: E0414 00:00:38.870882 2499 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:00:42.760778 kubelet[2499]: E0414 00:00:42.760380 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Apr 14 00:00:42.891409 kubelet[2499]: E0414 00:00:42.887184 2499 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 00:00:43.007305 kubelet[2499]: E0414 00:00:42.988660 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:47.211948 kubelet[2499]: E0414 00:00:47.210388 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 14 00:00:48.357249 kubelet[2499]: E0414 00:00:48.354439 2499 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 00:00:48.903000 kubelet[2499]: E0414 00:00:48.900284 2499 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 00:00:49.414307 kubelet[2499]: I0414 00:00:49.413811 2499 apiserver.go:52] "Watching apiserver"
Apr 14 00:00:50.023671 kubelet[2499]: I0414 00:00:50.023497 2499 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 14 00:00:50.024341 kubelet[2499]: I0414 00:00:50.023930 2499 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:00:50.488219 kubelet[2499]: E0414 00:00:50.481845 2499 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a6101a0500735a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:00:06.983349082 +0000 UTC m=+9.618545353,LastTimestamp:2026-04-14 00:00:06.983349082 +0000 UTC m=+9.618545353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:00:50.848449 kubelet[2499]: I0414 00:00:50.840929 2499 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 14 00:00:50.856988 kubelet[2499]: E0414 00:00:50.855609 2499 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 14 00:00:50.989384 kubelet[2499]: I0414 00:00:50.986656 2499 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 00:00:52.713247 kubelet[2499]: E0414 00:00:52.701617 2499 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a6101a0d307863 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:00:07.120713827 +0000 UTC m=+9.755910089,LastTimestamp:2026-04-14 00:00:07.120713827 +0000 UTC m=+9.755910089,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 00:00:52.924168 kubelet[2499]: I0414 00:00:52.910040 2499 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:00:53.069888 kubelet[2499]: E0414 00:00:53.062863 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:53.203271 kubelet[2499]: I0414 00:00:53.202345 2499 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 14 00:00:53.267238 kubelet[2499]: E0414 00:00:53.251753 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:54.775424 kubelet[2499]: E0414 00:00:54.741895 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:00:59.694831 kubelet[2499]: I0414 00:00:59.641790 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.62977448 podStartE2EDuration="7.62977448s" podCreationTimestamp="2026-04-14 00:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:00:58.789006191 +0000 UTC m=+61.424202464" watchObservedRunningTime="2026-04-14 00:00:59.62977448 +0000 UTC m=+62.264970749"
Apr 14 00:00:59.705253 kubelet[2499]: I0414 00:00:59.698278 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.697560447 podStartE2EDuration="6.697560447s" podCreationTimestamp="2026-04-14 00:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:00:59.697149114 +0000 UTC m=+62.332345382" watchObservedRunningTime="2026-04-14 00:00:59.697560447 +0000 UTC m=+62.332756728"
Apr 14 00:01:52.421054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ce48a6a211ccdeb476711a4c7177a6c785bb8406f9ac9bb66211195fb2f3f71-rootfs.mount: Deactivated successfully.
Apr 14 00:01:52.682608 containerd[1602]: time="2026-04-14T00:01:52.671975317Z" level=info msg="shim disconnected" id=2ce48a6a211ccdeb476711a4c7177a6c785bb8406f9ac9bb66211195fb2f3f71 namespace=k8s.io
Apr 14 00:01:52.691259 containerd[1602]: time="2026-04-14T00:01:52.686400814Z" level=warning msg="cleaning up after shim disconnected" id=2ce48a6a211ccdeb476711a4c7177a6c785bb8406f9ac9bb66211195fb2f3f71 namespace=k8s.io
Apr 14 00:01:52.693531 containerd[1602]: time="2026-04-14T00:01:52.691424581Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 00:01:55.670529 kubelet[2499]: I0414 00:01:55.667836 2499 scope.go:117] "RemoveContainer" containerID="2ce48a6a211ccdeb476711a4c7177a6c785bb8406f9ac9bb66211195fb2f3f71"
Apr 14 00:01:55.696586 kubelet[2499]: E0414 00:01:55.696232 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:01:56.589268 containerd[1602]: time="2026-04-14T00:01:56.583064851Z" level=info msg="CreateContainer within sandbox \"a744d963bee931f372cb869fd7b0af8c2032a2cb274d27cf5f4a4b4690574f49\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 14 00:01:57.025834 kubelet[2499]: I0414 00:01:57.019486 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=64.018524982 podStartE2EDuration="1m4.018524982s" podCreationTimestamp="2026-04-14 00:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:01:00.669037202 +0000 UTC m=+63.304233473" watchObservedRunningTime="2026-04-14 00:01:57.018524982 +0000 UTC m=+119.653721253"
Apr 14 00:01:57.565785 containerd[1602]: time="2026-04-14T00:01:57.562576031Z" level=info msg="CreateContainer within sandbox \"a744d963bee931f372cb869fd7b0af8c2032a2cb274d27cf5f4a4b4690574f49\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4aab32ba02185385451d85ccaa561401b1bdfca5a08499a55631b4ef5d812705\""
Apr 14 00:01:57.658639 containerd[1602]: time="2026-04-14T00:01:57.648512167Z" level=info msg="StartContainer for \"4aab32ba02185385451d85ccaa561401b1bdfca5a08499a55631b4ef5d812705\""
Apr 14 00:02:00.299182 containerd[1602]: time="2026-04-14T00:02:00.288708791Z" level=info msg="StartContainer for \"4aab32ba02185385451d85ccaa561401b1bdfca5a08499a55631b4ef5d812705\" returns successfully"
Apr 14 00:02:01.973267 kubelet[2499]: E0414 00:02:01.965469 2499 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.423s"
Apr 14 00:02:03.353173 kubelet[2499]: E0414 00:02:03.346938 2499 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.226s"
Apr 14 00:02:04.929891 kubelet[2499]: E0414 00:02:04.917682 2499 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.494s"
Apr 14 00:02:05.587274 kubelet[2499]: E0414 00:02:05.586022 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:06.354258 kubelet[2499]: E0414 00:02:06.336448 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:07.582811 kubelet[2499]: E0414 00:02:07.581083 2499 kubelet_node_status.go:460] "Node not becoming ready in time after startup"
Apr 14 00:02:08.974963 kubelet[2499]: E0414 00:02:08.937799 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:09.020822 kubelet[2499]: E0414 00:02:09.020075 2499 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:02:09.110148 kubelet[2499]: E0414 00:02:09.107683 2499 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.561s"
Apr 14 00:02:10.134227 kubelet[2499]: E0414 00:02:10.131981 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:14.103234 kubelet[2499]: E0414 00:02:14.097683 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:14.217147 kubelet[2499]: E0414 00:02:14.216945 2499 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:02:18.562036 kubelet[2499]: E0414 00:02:18.561023 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:19.399834 kubelet[2499]: E0414 00:02:19.397399 2499 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:02:20.523490 kubelet[2499]: E0414 00:02:20.522431 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:24.658019 kubelet[2499]: E0414 00:02:24.655513 2499 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:02:29.001366 kubelet[2499]: E0414 00:02:29.000758 2499 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.366s"
Apr 14 00:02:29.818077 kubelet[2499]: E0414 00:02:29.813336 2499 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:02:30.911143 kubelet[2499]: E0414 00:02:30.909045 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:32.589011 kubelet[2499]: E0414 00:02:32.586720 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:02:33.236957 systemd[1]: Reloading requested from client PID 2865 ('systemctl') (unit session-9.scope)...
Apr 14 00:02:33.237142 systemd[1]: Reloading...
Apr 14 00:02:35.480766 kubelet[2499]: E0414 00:02:35.413979 2499 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 00:02:35.526517 zram_generator::config[2904]: No configuration found.
Apr 14 00:02:37.530653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 00:02:39.308076 systemd[1]: Reloading finished in 6065 ms.
Apr 14 00:02:40.018188 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:02:40.209903 systemd[1]: kubelet.service: Deactivated successfully.
Apr 14 00:02:40.219480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:02:40.426964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 00:02:42.279939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 00:02:42.512491 (kubelet)[2958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 00:02:47.819906 kubelet[2958]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 00:02:47.821998 kubelet[2958]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 14 00:02:47.821998 kubelet[2958]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 00:02:47.830622 kubelet[2958]: I0414 00:02:47.825004 2958 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 14 00:02:48.524447 kubelet[2958]: I0414 00:02:48.518543 2958 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 14 00:02:48.536198 kubelet[2958]: I0414 00:02:48.532387 2958 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 00:02:48.641475 kubelet[2958]: I0414 00:02:48.641210 2958 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 14 00:02:49.021623 kubelet[2958]: I0414 00:02:49.017264 2958 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 14 00:02:50.040970 kubelet[2958]: I0414 00:02:50.028822 2958 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 00:02:50.805367 kubelet[2958]: E0414 00:02:50.789265 2958 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 00:02:50.818911 kubelet[2958]: I0414 00:02:50.813739 2958 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 14 00:02:51.406903 kubelet[2958]: I0414 00:02:51.406765 2958 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 14 00:02:51.490224 kubelet[2958]: I0414 00:02:51.428519 2958 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 00:02:51.522994 kubelet[2958]: I0414 00:02:51.498261 2958 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 14 00:02:51.522994 kubelet[2958]: I0414 00:02:51.517632 2958 topology_manager.go:138] "Creating topology manager with none policy"
Apr 14 00:02:51.541060 kubelet[2958]: I0414 00:02:51.523818 2958 container_manager_linux.go:303] "Creating device plugin manager"
Apr 14 00:02:51.541060 kubelet[2958]: I0414 00:02:51.533649 2958 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 00:02:51.557930 kubelet[2958]: I0414 00:02:51.555613 2958 kubelet.go:480] "Attempting to sync node with API server"
Apr 14 00:02:51.557930 kubelet[2958]: I0414 00:02:51.557561 2958 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 00:02:51.563036 kubelet[2958]: I0414 00:02:51.559429 2958 kubelet.go:386] "Adding apiserver pod source"
Apr 14 00:02:51.567792 kubelet[2958]: I0414 00:02:51.565934 2958 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 00:02:51.939822 kubelet[2958]: I0414 00:02:51.939634 2958 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 00:02:51.996915 kubelet[2958]: I0414 00:02:51.996865 2958 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 00:02:52.320385 kubelet[2958]: I0414 00:02:52.320251 2958 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 14 00:02:52.322386 kubelet[2958]: I0414 00:02:52.322059 2958 server.go:1289] "Started kubelet"
Apr 14 00:02:52.359280 kubelet[2958]: I0414 00:02:52.332364 2958 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 00:02:52.359280 kubelet[2958]: I0414 00:02:52.328735 2958 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 00:02:52.429612 kubelet[2958]: I0414 00:02:52.429420 2958 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 00:02:52.705400 kubelet[2958]: I0414 00:02:52.695731 2958 apiserver.go:52] "Watching apiserver"
Apr 14 00:02:52.720616 kubelet[2958]: I0414 00:02:52.711772 2958 server.go:317] "Adding debug handlers to kubelet server"
Apr 14 00:02:52.730606 kubelet[2958]: I0414 00:02:52.728843 2958 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 14 00:02:52.854583 kubelet[2958]: I0414 00:02:52.843010 2958 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 14 00:02:52.883363 kubelet[2958]: I0414 00:02:52.876662 2958 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 14 00:02:52.886475 kubelet[2958]: I0414 00:02:52.883862 2958 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 00:02:52.942172 kubelet[2958]: I0414 00:02:52.935547 2958 reconciler.go:26] "Reconciler: start to sync state"
Apr 14 00:02:53.332809 kubelet[2958]: I0414 00:02:53.331085 2958 factory.go:223] Registration of the systemd container factory successfully
Apr 14 00:02:53.396371 kubelet[2958]: I0414 00:02:53.389045 2958 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 00:02:53.599194 kubelet[2958]: I0414 00:02:53.596587 2958 factory.go:223] Registration of the containerd container factory successfully
Apr 14 00:02:53.968906 kubelet[2958]: I0414 00:02:53.967190 2958 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 14 00:02:53.969461 kubelet[2958]: E0414 00:02:53.969271 2958 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 00:02:53.977654 kubelet[2958]: I0414 00:02:53.974125 2958 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 14 00:02:53.977654 kubelet[2958]: I0414 00:02:53.974594 2958 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 14 00:02:53.977654 kubelet[2958]: I0414 00:02:53.974956 2958 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 00:02:54.006044 kubelet[2958]: I0414 00:02:53.987041 2958 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 14 00:02:54.019490 kubelet[2958]: E0414 00:02:54.006790 2958 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 00:02:54.390954 kubelet[2958]: E0414 00:02:54.388830 2958 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 00:02:54.734569 kubelet[2958]: E0414 00:02:54.641951 2958 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:02:55.226676 kubelet[2958]: E0414 00:02:55.225208 2958 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:02:56.094480 kubelet[2958]: E0414 00:02:56.081636 2958 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:02:57.719284 kubelet[2958]: E0414 00:02:57.716425 2958 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:03:01.004941 kubelet[2958]: E0414 00:03:00.972245 2958 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 00:03:02.528192 kubelet[2958]: I0414 00:03:02.524474 2958 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 14 00:03:02.528192 kubelet[2958]: I0414 00:03:02.524590 2958 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 14 00:03:02.528192 kubelet[2958]: I0414 00:03:02.524626 2958 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 00:03:02.542924 kubelet[2958]: I0414 00:03:02.537311 2958 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 14 00:03:02.542924 kubelet[2958]: I0414 00:03:02.537444 2958 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 14 00:03:02.542924 kubelet[2958]: I0414 00:03:02.537626 2958 policy_none.go:49] "None policy: Start"
Apr 14 00:03:02.542924 kubelet[2958]: I0414 00:03:02.537694 2958 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 14 00:03:02.542924 kubelet[2958]: I0414 00:03:02.537719 2958 state_mem.go:35] "Initializing new in-memory state store"
Apr 14 00:03:02.556624 kubelet[2958]: I0414 00:03:02.555982 2958 state_mem.go:75] "Updated machine memory state"
Apr 14 00:03:02.729792 kubelet[2958]: E0414 00:03:02.726353 2958 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 00:03:02.789330 kubelet[2958]: I0414 00:03:02.783399 2958 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 14 00:03:02.800445 kubelet[2958]: I0414 00:03:02.795182 2958 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 00:03:03.398185 kubelet[2958]: I0414 00:03:03.383428 2958 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 14 00:03:03.419892 kubelet[2958]: E0414 00:03:03.419528 2958 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 00:03:04.995980 kubelet[2958]: I0414 00:03:04.995237 2958 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 00:03:06.171579 kubelet[2958]: I0414 00:03:06.169897 2958 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 14 00:03:06.212629 kubelet[2958]: I0414 00:03:06.203750 2958 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 14 00:03:06.224332 kubelet[2958]: I0414 00:03:06.219807 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97d6824ebc055ec4160426c562e53a82-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"97d6824ebc055ec4160426c562e53a82\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:03:06.227466 kubelet[2958]: I0414 00:03:06.225909 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97d6824ebc055ec4160426c562e53a82-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"97d6824ebc055ec4160426c562e53a82\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:03:06.231537 kubelet[2958]: I0414 00:03:06.228985 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97d6824ebc055ec4160426c562e53a82-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"97d6824ebc055ec4160426c562e53a82\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 00:03:06.305172 kubelet[2958]: I0414 00:03:06.303563 2958 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 00:03:06.393140 kubelet[2958]: I0414 00:03:06.387340 2958 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 14 00:03:06.507553 kubelet[2958]: I0414 00:03:06.505522 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:03:06.542440 kubelet[2958]: I0414 00:03:06.539958 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:03:06.640377 kubelet[2958]: I0414 00:03:06.564727 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:03:06.670538 kubelet[2958]: I0414 00:03:06.640512 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost"
Apr 14 00:03:06.686269 kubelet[2958]: I0414 00:03:06.683015 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:03:06.733206 kubelet[2958]: I0414 00:03:06.723005 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 00:03:07.036309 kubelet[2958]: E0414 00:03:07.024409 2958 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 14 00:03:07.036309 kubelet[2958]: E0414 00:03:07.024841 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:07.405640 kubelet[2958]: E0414 00:03:07.404394 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:07.521567 kubelet[2958]: E0414 00:03:07.520768 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:08.607538 kubelet[2958]: E0414 00:03:08.607469 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:08.608051 kubelet[2958]: E0414 00:03:08.607724 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:08.608051 kubelet[2958]: E0414 00:03:08.607858 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:10.089646 kubelet[2958]: E0414 00:03:10.089410 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:13.946414 kubelet[2958]: E0414 00:03:13.931578 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.923s"
Apr 14 00:03:19.970304 kubelet[2958]: I0414 00:03:19.968864 2958 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 14 00:03:20.590224 containerd[1602]: time="2026-04-14T00:03:20.588559651Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 14 00:03:20.905266 kubelet[2958]: I0414 00:03:20.904513 2958 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 14 00:03:21.472776 kubelet[2958]: E0414 00:03:21.462288 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.444s"
Apr 14 00:03:22.800309 kubelet[2958]: E0414 00:03:22.791808 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:23.332263 kubelet[2958]: E0414 00:03:23.315568 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.195s"
Apr 14 00:03:23.600031 kubelet[2958]: E0414 00:03:23.579757 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:03:23.999472 kubelet[2958]: E0414 00:03:23.992776 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:03:24.390865 kubelet[2958]: I0414 00:03:24.379769 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8eacea06-8002-4c81-8c50-616dc1666389-kube-proxy\") pod \"kube-proxy-bk6dq\" (UID: \"8eacea06-8002-4c81-8c50-616dc1666389\") " pod="kube-system/kube-proxy-bk6dq" Apr 14 00:03:24.697414 kubelet[2958]: I0414 00:03:24.688665 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8eacea06-8002-4c81-8c50-616dc1666389-xtables-lock\") pod \"kube-proxy-bk6dq\" (UID: \"8eacea06-8002-4c81-8c50-616dc1666389\") " pod="kube-system/kube-proxy-bk6dq" Apr 14 00:03:25.664049 kubelet[2958]: I0414 00:03:25.543873 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eacea06-8002-4c81-8c50-616dc1666389-lib-modules\") pod \"kube-proxy-bk6dq\" (UID: \"8eacea06-8002-4c81-8c50-616dc1666389\") " pod="kube-system/kube-proxy-bk6dq" Apr 14 00:03:25.694848 kubelet[2958]: I0414 00:03:25.677833 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8446\" (UniqueName: \"kubernetes.io/projected/8eacea06-8002-4c81-8c50-616dc1666389-kube-api-access-x8446\") pod \"kube-proxy-bk6dq\" (UID: \"8eacea06-8002-4c81-8c50-616dc1666389\") " pod="kube-system/kube-proxy-bk6dq" Apr 14 00:03:27.084578 kubelet[2958]: E0414 00:03:27.084328 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.091s" Apr 14 00:03:28.379580 kubelet[2958]: E0414 00:03:28.379391 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 14 00:03:28.404576 kubelet[2958]: E0414 00:03:28.404321 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:03:28.653730 containerd[1602]: time="2026-04-14T00:03:28.648500817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bk6dq,Uid:8eacea06-8002-4c81-8c50-616dc1666389,Namespace:kube-system,Attempt:0,}" Apr 14 00:03:31.383923 containerd[1602]: time="2026-04-14T00:03:31.367213808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:03:31.383923 containerd[1602]: time="2026-04-14T00:03:31.378583979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:03:31.383923 containerd[1602]: time="2026-04-14T00:03:31.378730590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:03:31.500882 containerd[1602]: time="2026-04-14T00:03:31.496928981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:03:34.441518 containerd[1602]: time="2026-04-14T00:03:34.386050464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bk6dq,Uid:8eacea06-8002-4c81-8c50-616dc1666389,Namespace:kube-system,Attempt:0,} returns sandbox id \"26786d0d319257b1e7e0e46a4a9fc85aa5fa3b6762dfe2bf12825414195b5f0b\"" Apr 14 00:03:34.596673 kubelet[2958]: E0414 00:03:34.586168 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.475s" Apr 14 00:03:37.739823 kubelet[2958]: E0414 00:03:37.737501 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:03:38.719415 kubelet[2958]: E0414 00:03:38.716398 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.129s" Apr 14 00:03:39.623208 containerd[1602]: time="2026-04-14T00:03:39.610543334Z" level=info msg="CreateContainer within sandbox \"26786d0d319257b1e7e0e46a4a9fc85aa5fa3b6762dfe2bf12825414195b5f0b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 14 00:03:39.970968 kubelet[2958]: I0414 00:03:39.921995 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z95ds\" (UniqueName: \"kubernetes.io/projected/05aed671-1fae-4de5-9743-445b3f4b08f7-kube-api-access-z95ds\") pod \"tigera-operator-6bf85f8dd-tpkjs\" (UID: \"05aed671-1fae-4de5-9743-445b3f4b08f7\") " pod="tigera-operator/tigera-operator-6bf85f8dd-tpkjs" Apr 14 00:03:39.990485 kubelet[2958]: I0414 00:03:39.982975 2958 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05aed671-1fae-4de5-9743-445b3f4b08f7-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-tpkjs\" (UID: 
\"05aed671-1fae-4de5-9743-445b3f4b08f7\") " pod="tigera-operator/tigera-operator-6bf85f8dd-tpkjs" Apr 14 00:03:41.147930 containerd[1602]: time="2026-04-14T00:03:41.141017595Z" level=info msg="CreateContainer within sandbox \"26786d0d319257b1e7e0e46a4a9fc85aa5fa3b6762dfe2bf12825414195b5f0b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aa5e28f8ef01594327eef12496704c0e013302cfff67918813095d4eb8782110\"" Apr 14 00:03:42.024419 containerd[1602]: time="2026-04-14T00:03:42.016008294Z" level=info msg="StartContainer for \"aa5e28f8ef01594327eef12496704c0e013302cfff67918813095d4eb8782110\"" Apr 14 00:03:43.408273 containerd[1602]: time="2026-04-14T00:03:43.406868835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-tpkjs,Uid:05aed671-1fae-4de5-9743-445b3f4b08f7,Namespace:tigera-operator,Attempt:0,}" Apr 14 00:03:43.701680 kubelet[2958]: E0414 00:03:43.683976 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.688s" Apr 14 00:03:45.868867 kubelet[2958]: E0414 00:03:45.864854 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.138s" Apr 14 00:03:48.801383 containerd[1602]: time="2026-04-14T00:03:48.793641870Z" level=error msg="get state for aa5e28f8ef01594327eef12496704c0e013302cfff67918813095d4eb8782110" error="context deadline exceeded: unknown" Apr 14 00:03:48.801383 containerd[1602]: time="2026-04-14T00:03:48.798514101Z" level=warning msg="unknown status" status=0 Apr 14 00:03:48.879010 containerd[1602]: time="2026-04-14T00:03:48.877585884Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 14 00:03:50.571621 containerd[1602]: time="2026-04-14T00:03:50.513673116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:03:50.571621 containerd[1602]: time="2026-04-14T00:03:50.513763401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:03:50.571621 containerd[1602]: time="2026-04-14T00:03:50.513822181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:03:50.571621 containerd[1602]: time="2026-04-14T00:03:50.513990303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:03:52.914362 containerd[1602]: time="2026-04-14T00:03:52.810904341Z" level=info msg="StartContainer for \"aa5e28f8ef01594327eef12496704c0e013302cfff67918813095d4eb8782110\" returns successfully" Apr 14 00:03:54.231285 containerd[1602]: time="2026-04-14T00:03:54.230964013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-tpkjs,Uid:05aed671-1fae-4de5-9743-445b3f4b08f7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f0593e38f941c9a493e173b3b0ecac471f3847e3ff4800fc707fb8f9beaf4079\"" Apr 14 00:03:54.996660 kubelet[2958]: E0414 00:03:54.996020 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.003s" Apr 14 00:03:57.269868 containerd[1602]: time="2026-04-14T00:03:57.269663810Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 14 00:03:57.555514 kubelet[2958]: E0414 00:03:57.537932 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.542s" Apr 14 00:03:59.101073 kubelet[2958]: E0414 00:03:59.097878 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:03:59.125724 
kubelet[2958]: E0414 00:03:59.120984 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.581s" Apr 14 00:04:01.222443 kubelet[2958]: E0414 00:04:01.040909 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.841s" Apr 14 00:04:02.503171 kubelet[2958]: E0414 00:04:02.496895 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:04:03.030260 kubelet[2958]: E0414 00:04:02.981737 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.766s" Apr 14 00:04:06.038628 kubelet[2958]: E0414 00:04:05.984419 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.804s" Apr 14 00:04:07.891918 kubelet[2958]: E0414 00:04:07.833994 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.741s" Apr 14 00:04:09.541851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount608520312.mount: Deactivated successfully. 
Apr 14 00:04:10.891778 kubelet[2958]: E0414 00:04:10.890913 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.805s" Apr 14 00:04:13.722505 kubelet[2958]: E0414 00:04:13.717873 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.803s" Apr 14 00:04:15.391763 kubelet[2958]: E0414 00:04:15.297336 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.469s" Apr 14 00:04:16.984761 kubelet[2958]: E0414 00:04:16.984537 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.464s" Apr 14 00:04:18.274710 kubelet[2958]: E0414 00:04:18.232790 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.23s" Apr 14 00:04:24.162578 update_engine[1588]: I20260414 00:04:24.159605 1588 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 14 00:04:24.166259 update_engine[1588]: I20260414 00:04:24.163407 1588 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 14 00:04:24.178821 update_engine[1588]: I20260414 00:04:24.178467 1588 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 14 00:04:24.239652 update_engine[1588]: I20260414 00:04:24.236024 1588 omaha_request_params.cc:62] Current group set to lts Apr 14 00:04:24.270606 update_engine[1588]: I20260414 00:04:24.270494 1588 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 14 00:04:24.272762 update_engine[1588]: I20260414 00:04:24.272516 1588 update_attempter.cc:643] Scheduling an action processor start. 
Apr 14 00:04:24.275433 update_engine[1588]: I20260414 00:04:24.275295 1588 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 14 00:04:24.276748 update_engine[1588]: I20260414 00:04:24.276665 1588 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 14 00:04:24.279753 update_engine[1588]: I20260414 00:04:24.279592 1588 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 14 00:04:24.280832 update_engine[1588]: I20260414 00:04:24.280798 1588 omaha_request_action.cc:272] Request: Apr 14 00:04:24.280832 update_engine[1588]: Apr 14 00:04:24.280832 update_engine[1588]: Apr 14 00:04:24.280832 update_engine[1588]: Apr 14 00:04:24.280832 update_engine[1588]: Apr 14 00:04:24.280832 update_engine[1588]: Apr 14 00:04:24.280832 update_engine[1588]: Apr 14 00:04:24.280832 update_engine[1588]: Apr 14 00:04:24.280832 update_engine[1588]: Apr 14 00:04:24.284461 update_engine[1588]: I20260414 00:04:24.284247 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 00:04:24.295667 locksmithd[1641]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 14 00:04:24.397874 update_engine[1588]: I20260414 00:04:24.394806 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 00:04:24.407516 update_engine[1588]: I20260414 00:04:24.407295 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 14 00:04:24.489651 update_engine[1588]: E20260414 00:04:24.488734 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 00:04:24.491996 update_engine[1588]: I20260414 00:04:24.489777 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 14 00:04:25.329909 kubelet[2958]: E0414 00:04:25.329003 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.207s" Apr 14 00:04:25.617551 kubelet[2958]: E0414 00:04:25.610016 2958 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:04:27.284815 kubelet[2958]: E0414 00:04:27.239038 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.15s" Apr 14 00:04:30.422000 kubelet[2958]: E0414 00:04:30.392997 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.014s" Apr 14 00:04:32.025277 kubelet[2958]: E0414 00:04:31.983876 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.557s" Apr 14 00:04:33.243322 kubelet[2958]: E0414 00:04:33.234679 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.202s" Apr 14 00:04:35.081476 update_engine[1588]: I20260414 00:04:35.077762 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 00:04:35.088790 update_engine[1588]: I20260414 00:04:35.084220 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 00:04:35.088790 update_engine[1588]: I20260414 00:04:35.088001 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 14 00:04:35.112518 update_engine[1588]: E20260414 00:04:35.109596 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 00:04:35.112518 update_engine[1588]: I20260414 00:04:35.110076 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 14 00:04:35.443903 kubelet[2958]: E0414 00:04:35.428774 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.324s" Apr 14 00:04:37.792960 kubelet[2958]: E0414 00:04:37.773832 2958 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.778s" Apr 14 00:04:38.666630 sudo[1813]: pam_unix(sudo:session): session closed for user root Apr 14 00:04:38.711012 sshd[1807]: pam_unix(sshd:session): session closed for user core Apr 14 00:04:38.825543 systemd[1]: sshd@8-10.0.0.40:22-10.0.0.1:50072.service: Deactivated successfully. Apr 14 00:04:38.957579 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit. Apr 14 00:04:38.964406 systemd[1]: session-9.scope: Deactivated successfully. Apr 14 00:04:39.134807 systemd-logind[1582]: Removed session 9.