Apr 14 13:17:55.874607 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 14 13:17:55.874627 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 13:17:55.874637 kernel: BIOS-provided physical RAM map:
Apr 14 13:17:55.874642 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 14 13:17:55.874646 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 14 13:17:55.874650 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 14 13:17:55.874656 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 14 13:17:55.874660 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 14 13:17:55.874664 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 14 13:17:55.874670 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 14 13:17:55.874675 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 14 13:17:55.874679 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 14 13:17:55.874696 kernel: NX (Execute Disable) protection: active
Apr 14 13:17:55.874701 kernel: APIC: Static calls initialized
Apr 14 13:17:55.874707 kernel: SMBIOS 2.8 present.
Apr 14 13:17:55.874721 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 14 13:17:55.874727 kernel: Hypervisor detected: KVM
Apr 14 13:17:55.874732 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 14 13:17:55.874736 kernel: kvm-clock: using sched offset of 9793679457 cycles
Apr 14 13:17:55.874742 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 14 13:17:55.874747 kernel: tsc: Detected 2793.438 MHz processor
Apr 14 13:17:55.874751 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 14 13:17:55.874757 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 14 13:17:55.874761 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 14 13:17:55.874768 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 14 13:17:55.874773 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 14 13:17:55.874778 kernel: Using GB pages for direct mapping
Apr 14 13:17:55.874783 kernel: ACPI: Early table checksum verification disabled
Apr 14 13:17:55.874787 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 14 13:17:55.874792 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:17:55.874797 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:17:55.874802 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:17:55.874807 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 14 13:17:55.874813 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:17:55.874818 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:17:55.874823 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:17:55.874827 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:17:55.874832 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 14 13:17:55.874837 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 14 13:17:55.874842 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 14 13:17:55.874849 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 14 13:17:55.874856 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 14 13:17:55.874861 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 14 13:17:55.874866 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 14 13:17:55.874871 kernel: No NUMA configuration found
Apr 14 13:17:55.874876 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 14 13:17:55.874881 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 14 13:17:55.874888 kernel: Zone ranges:
Apr 14 13:17:55.874893 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 14 13:17:55.874898 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 14 13:17:55.874903 kernel: Normal empty
Apr 14 13:17:55.874908 kernel: Movable zone start for each node
Apr 14 13:17:55.874913 kernel: Early memory node ranges
Apr 14 13:17:55.874918 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 14 13:17:55.874923 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 14 13:17:55.874928 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 14 13:17:55.874933 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 14 13:17:55.874939 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 14 13:17:55.874951 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 14 13:17:55.874956 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 14 13:17:55.874961 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 14 13:17:55.874967 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 14 13:17:55.874971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 14 13:17:55.874976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 14 13:17:55.874982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 14 13:17:55.874987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 14 13:17:55.874993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 14 13:17:55.874999 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 14 13:17:55.875004 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 14 13:17:55.875009 kernel: TSC deadline timer available
Apr 14 13:17:55.875014 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 14 13:17:55.875019 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 14 13:17:55.875024 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 14 13:17:55.875029 kernel: kvm-guest: setup PV sched yield
Apr 14 13:17:55.875040 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 14 13:17:55.875048 kernel: Booting paravirtualized kernel on KVM
Apr 14 13:17:55.875053 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 14 13:17:55.875058 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 14 13:17:55.875064 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 14 13:17:55.875069 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 14 13:17:55.875074 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 14 13:17:55.875079 kernel: kvm-guest: PV spinlocks enabled
Apr 14 13:17:55.875084 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 14 13:17:55.875089 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 13:17:55.875096 kernel: random: crng init done
Apr 14 13:17:55.875101 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 14 13:17:55.875106 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 14 13:17:55.875112 kernel: Fallback order for Node 0: 0
Apr 14 13:17:55.875117 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 14 13:17:55.875122 kernel: Policy zone: DMA32
Apr 14 13:17:55.875127 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 14 13:17:55.875132 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137900K reserved, 0K cma-reserved)
Apr 14 13:17:55.875139 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 14 13:17:55.875144 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 14 13:17:55.875149 kernel: ftrace: allocated 149 pages with 4 groups
Apr 14 13:17:55.875154 kernel: Dynamic Preempt: voluntary
Apr 14 13:17:55.875159 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 14 13:17:55.875164 kernel: rcu: RCU event tracing is enabled.
Apr 14 13:17:55.875170 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 14 13:17:55.875175 kernel: Trampoline variant of Tasks RCU enabled.
Apr 14 13:17:55.875180 kernel: Rude variant of Tasks RCU enabled.
Apr 14 13:17:55.875187 kernel: Tracing variant of Tasks RCU enabled.
Apr 14 13:17:55.875192 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 14 13:17:55.875197 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 14 13:17:55.875203 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 14 13:17:55.875214 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 14 13:17:55.875219 kernel: Console: colour VGA+ 80x25
Apr 14 13:17:55.875224 kernel: printk: console [ttyS0] enabled
Apr 14 13:17:55.875230 kernel: ACPI: Core revision 20230628
Apr 14 13:17:55.875238 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 14 13:17:55.875249 kernel: APIC: Switch to symmetric I/O mode setup
Apr 14 13:17:55.875257 kernel: x2apic enabled
Apr 14 13:17:55.875266 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 14 13:17:55.875273 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 14 13:17:55.875313 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 14 13:17:55.875323 kernel: kvm-guest: setup PV IPIs
Apr 14 13:17:55.875329 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 14 13:17:55.875338 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 13:17:55.875359 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 14 13:17:55.875370 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 14 13:17:55.875376 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 14 13:17:55.875381 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 14 13:17:55.876208 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 14 13:17:55.876235 kernel: Spectre V2 : Mitigation: Retpolines
Apr 14 13:17:55.876241 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 14 13:17:55.876247 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 14 13:17:55.876306 kernel: RETBleed: Vulnerable
Apr 14 13:17:55.876313 kernel: Speculative Store Bypass: Vulnerable
Apr 14 13:17:55.876319 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 14 13:17:55.876334 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 14 13:17:55.876341 kernel: active return thunk: its_return_thunk
Apr 14 13:17:55.876347 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 14 13:17:55.876353 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 14 13:17:55.876358 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 14 13:17:55.876364 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 14 13:17:55.876372 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 14 13:17:55.876377 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 14 13:17:55.876383 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 14 13:17:55.876400 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 14 13:17:55.876406 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 14 13:17:55.876412 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 14 13:17:55.876417 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 14 13:17:55.876423 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 14 13:17:55.876429 kernel: Freeing SMP alternatives memory: 32K
Apr 14 13:17:55.876440 kernel: pid_max: default: 32768 minimum: 301
Apr 14 13:17:55.876446 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 14 13:17:55.876452 kernel: landlock: Up and running.
Apr 14 13:17:55.876458 kernel: SELinux: Initializing.
Apr 14 13:17:55.876463 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 13:17:55.876469 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 13:17:55.876475 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 14 13:17:55.876488 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:17:55.876494 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:17:55.876502 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:17:55.876507 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 14 13:17:55.876513 kernel: signal: max sigframe size: 3632
Apr 14 13:17:55.876519 kernel: rcu: Hierarchical SRCU implementation.
Apr 14 13:17:55.876525 kernel: rcu: Max phase no-delay instances is 400.
Apr 14 13:17:55.876531 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 14 13:17:55.876539 kernel: smp: Bringing up secondary CPUs ...
Apr 14 13:17:55.876548 kernel: smpboot: x86: Booting SMP configuration:
Apr 14 13:17:55.876558 kernel: .... node #0, CPUs: #1 #2 #3
Apr 14 13:17:55.876570 kernel: smp: Brought up 1 node, 4 CPUs
Apr 14 13:17:55.876578 kernel: smpboot: Max logical packages: 1
Apr 14 13:17:55.876584 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 14 13:17:55.876803 kernel: devtmpfs: initialized
Apr 14 13:17:55.876860 kernel: x86/mm: Memory block size: 128MB
Apr 14 13:17:55.876866 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 14 13:17:55.876871 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 14 13:17:55.876877 kernel: pinctrl core: initialized pinctrl subsystem
Apr 14 13:17:55.876883 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 14 13:17:55.876966 kernel: audit: initializing netlink subsys (disabled)
Apr 14 13:17:55.876987 kernel: audit: type=2000 audit(1776172671.990:1): state=initialized audit_enabled=0 res=1
Apr 14 13:17:55.876993 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 14 13:17:55.876998 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 14 13:17:55.877004 kernel: cpuidle: using governor menu
Apr 14 13:17:55.877010 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 14 13:17:55.877015 kernel: dca service started, version 1.12.1
Apr 14 13:17:55.877021 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 14 13:17:55.877027 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 14 13:17:55.877035 kernel: PCI: Using configuration type 1 for base access
Apr 14 13:17:55.877041 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 14 13:17:55.877046 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 14 13:17:55.877052 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 14 13:17:55.877058 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 14 13:17:55.877063 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 14 13:17:55.877069 kernel: ACPI: Added _OSI(Module Device)
Apr 14 13:17:55.877075 kernel: ACPI: Added _OSI(Processor Device)
Apr 14 13:17:55.877080 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 14 13:17:55.877092 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 14 13:17:55.877098 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 14 13:17:55.877103 kernel: ACPI: Interpreter enabled
Apr 14 13:17:55.877109 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 14 13:17:55.877114 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 14 13:17:55.877120 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 14 13:17:55.877126 kernel: PCI: Using E820 reservations for host bridge windows
Apr 14 13:17:55.877131 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 14 13:17:55.877137 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 14 13:17:55.877625 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 14 13:17:55.877709 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 14 13:17:55.877790 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 14 13:17:55.877798 kernel: PCI host bridge to bus 0000:00
Apr 14 13:17:55.877905 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 14 13:17:55.877963 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 14 13:17:55.878025 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 14 13:17:55.878082 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 14 13:17:55.878138 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 14 13:17:55.878194 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 14 13:17:55.878250 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 14 13:17:55.878534 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 14 13:17:55.878750 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 14 13:17:55.878998 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 14 13:17:55.879131 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 14 13:17:55.879220 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 14 13:17:55.879381 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 14 13:17:55.879513 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 14 13:17:55.879631 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 14 13:17:55.879754 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 14 13:17:55.879864 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 14 13:17:55.883592 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 14 13:17:55.883710 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 14 13:17:55.883836 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 14 13:17:55.883953 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 14 13:17:55.884651 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 14 13:17:55.884756 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 14 13:17:55.884820 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 14 13:17:55.884882 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 14 13:17:55.884998 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 14 13:17:55.885117 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 14 13:17:55.885215 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 14 13:17:55.885402 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 14 13:17:55.885625 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 14 13:17:55.885751 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 14 13:17:55.885904 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 14 13:17:55.886019 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 14 13:17:55.886033 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 14 13:17:55.886043 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 14 13:17:55.886053 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 14 13:17:55.886063 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 14 13:17:55.886081 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 14 13:17:55.886092 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 14 13:17:55.886103 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 14 13:17:55.886112 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 14 13:17:55.886121 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 14 13:17:55.886131 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 14 13:17:55.886141 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 14 13:17:55.886151 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 14 13:17:55.886162 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 14 13:17:55.886175 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 14 13:17:55.886183 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 14 13:17:55.886194 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 14 13:17:55.886203 kernel: iommu: Default domain type: Translated
Apr 14 13:17:55.886211 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 14 13:17:55.886221 kernel: PCI: Using ACPI for IRQ routing
Apr 14 13:17:55.886230 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 14 13:17:55.886239 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 14 13:17:55.886249 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 14 13:17:55.886494 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 14 13:17:55.886605 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 14 13:17:55.886697 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 14 13:17:55.886710 kernel: vgaarb: loaded
Apr 14 13:17:55.886722 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 14 13:17:55.886733 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 14 13:17:55.886744 kernel: clocksource: Switched to clocksource kvm-clock
Apr 14 13:17:55.886755 kernel: VFS: Disk quotas dquot_6.6.0
Apr 14 13:17:55.886771 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 14 13:17:55.886782 kernel: pnp: PnP ACPI init
Apr 14 13:17:55.886986 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 14 13:17:55.887006 kernel: pnp: PnP ACPI: found 6 devices
Apr 14 13:17:55.887018 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 14 13:17:55.887029 kernel: NET: Registered PF_INET protocol family
Apr 14 13:17:55.887040 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 14 13:17:55.887052 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 14 13:17:55.887069 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 14 13:17:55.887080 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 14 13:17:55.887092 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 14 13:17:55.887102 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 14 13:17:55.887112 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 13:17:55.887123 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 13:17:55.887134 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 14 13:17:55.887144 kernel: NET: Registered PF_XDP protocol family
Apr 14 13:17:55.887254 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 14 13:17:55.887797 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 14 13:17:55.887864 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 14 13:17:55.887920 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 14 13:17:55.887975 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 14 13:17:55.888029 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 14 13:17:55.888041 kernel: PCI: CLS 0 bytes, default 64
Apr 14 13:17:55.888051 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 14 13:17:55.888060 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 13:17:55.888075 kernel: Initialise system trusted keyrings
Apr 14 13:17:55.888085 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 14 13:17:55.888094 kernel: Key type asymmetric registered
Apr 14 13:17:55.888102 kernel: Asymmetric key parser 'x509' registered
Apr 14 13:17:55.888112 kernel: hrtimer: interrupt took 12973794 ns
Apr 14 13:17:55.888122 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 14 13:17:55.888132 kernel: io scheduler mq-deadline registered
Apr 14 13:17:55.888142 kernel: io scheduler kyber registered
Apr 14 13:17:55.888152 kernel: io scheduler bfq registered
Apr 14 13:17:55.888165 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 14 13:17:55.888176 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 14 13:17:55.888186 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 14 13:17:55.888196 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 14 13:17:55.888206 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 14 13:17:55.888214 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 14 13:17:55.888224 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 14 13:17:55.888234 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 14 13:17:55.888243 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 14 13:17:55.888450 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 14 13:17:55.888465 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 14 13:17:55.888536 kernel: rtc_cmos 00:04: registered as rtc0
Apr 14 13:17:55.889982 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T13:17:54 UTC (1776172674)
Apr 14 13:17:55.891045 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 14 13:17:55.891067 kernel: intel_pstate: CPU model not supported
Apr 14 13:17:55.891078 kernel: NET: Registered PF_INET6 protocol family
Apr 14 13:17:55.891089 kernel: Segment Routing with IPv6
Apr 14 13:17:55.891107 kernel: In-situ OAM (IOAM) with IPv6
Apr 14 13:17:55.891118 kernel: NET: Registered PF_PACKET protocol family
Apr 14 13:17:55.891128 kernel: Key type dns_resolver registered
Apr 14 13:17:55.891140 kernel: IPI shorthand broadcast: enabled
Apr 14 13:17:55.891151 kernel: sched_clock: Marking stable (2584077900, 661191073)->(3618405694, -373136721)
Apr 14 13:17:55.891161 kernel: registered taskstats version 1
Apr 14 13:17:55.891171 kernel: Loading compiled-in X.509 certificates
Apr 14 13:17:55.891182 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 14 13:17:55.891191 kernel: Key type .fscrypt registered
Apr 14 13:17:55.891205 kernel: Key type fscrypt-provisioning registered
Apr 14 13:17:55.891216 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 14 13:17:55.891227 kernel: ima: Allocated hash algorithm: sha1
Apr 14 13:17:55.891238 kernel: ima: No architecture policies found
Apr 14 13:17:55.891248 kernel: clk: Disabling unused clocks
Apr 14 13:17:55.891259 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 14 13:17:55.891269 kernel: Write protecting the kernel read-only data: 36864k
Apr 14 13:17:55.891942 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 14 13:17:55.891952 kernel: Run /init as init process
Apr 14 13:17:55.891964 kernel: with arguments:
Apr 14 13:17:55.891987 kernel: /init
Apr 14 13:17:55.891996 kernel: with environment:
Apr 14 13:17:55.892004 kernel: HOME=/
Apr 14 13:17:55.892012 kernel: TERM=linux
Apr 14 13:17:55.892025 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 13:17:55.892037 systemd[1]: Detected virtualization kvm.
Apr 14 13:17:55.892047 systemd[1]: Detected architecture x86-64.
Apr 14 13:17:55.892060 systemd[1]: Running in initrd.
Apr 14 13:17:55.892070 systemd[1]: No hostname configured, using default hostname.
Apr 14 13:17:55.892080 systemd[1]: Hostname set to .
Apr 14 13:17:55.892090 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 13:17:55.892099 systemd[1]: Queued start job for default target initrd.target.
Apr 14 13:17:55.892109 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:17:55.892119 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:17:55.892130 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 14 13:17:55.892143 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 13:17:55.892154 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 14 13:17:55.892175 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 14 13:17:55.892190 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 14 13:17:55.892200 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 14 13:17:55.892211 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 13:17:55.892221 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 13:17:55.892232 systemd[1]: Reached target paths.target - Path Units.
Apr 14 13:17:55.892242 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 13:17:55.892248 systemd[1]: Reached target swap.target - Swaps.
Apr 14 13:17:55.892254 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 13:17:55.892261 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 13:17:55.892267 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 13:17:55.892857 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 13:17:55.892875 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 13:17:55.892886 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 13:17:55.892896 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 13:17:55.892906 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 13:17:55.892916 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 13:17:55.892926 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 14 13:17:55.892935 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 13:17:55.892944 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 14 13:17:55.892960 systemd[1]: Starting systemd-fsck-usr.service...
Apr 14 13:17:55.892970 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 13:17:55.892982 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 13:17:55.892992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:17:55.893003 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 14 13:17:55.893014 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 13:17:55.893088 systemd-journald[194]: Collecting audit messages is disabled.
Apr 14 13:17:55.893109 systemd[1]: Finished systemd-fsck-usr.service.
Apr 14 13:17:55.893120 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 13:17:55.893127 systemd-journald[194]: Journal started
Apr 14 13:17:55.893144 systemd-journald[194]: Runtime Journal (/run/log/journal/c8ab17b3ef1049ea995dbb89a586e29f) is 6.0M, max 48.4M, 42.3M free.
Apr 14 13:17:55.883516 systemd-modules-load[195]: Inserted module 'overlay'
Apr 14 13:17:55.901656 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 13:17:55.918807 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 13:17:56.188005 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 14 13:17:56.188050 kernel: Bridge firewalling registered
Apr 14 13:17:55.971338 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 14 13:17:55.973855 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 13:17:56.188195 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 13:17:56.215670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:17:56.278926 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 13:17:56.287487 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 13:17:56.288765 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 13:17:56.289516 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 13:17:56.320887 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 13:17:56.325981 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 13:17:56.334215 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 13:17:56.349619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:17:56.377074 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 14 13:17:56.465911 systemd-resolved[225]: Positive Trust Anchors:
Apr 14 13:17:56.465932 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 13:17:56.465957 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 13:17:56.472686 systemd-resolved[225]: Defaulting to hostname 'linux'.
Apr 14 13:17:56.473887 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 13:17:56.489509 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 13:17:56.531463 dracut-cmdline[230]: dracut-dracut-053
Apr 14 13:17:56.596579 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 13:17:56.898465 kernel: SCSI subsystem initialized
Apr 14 13:17:56.915250 kernel: Loading iSCSI transport class v2.0-870.
Apr 14 13:17:56.979220 kernel: iscsi: registered transport (tcp)
Apr 14 13:17:57.088147 kernel: iscsi: registered transport (qla4xxx)
Apr 14 13:17:57.088504 kernel: QLogic iSCSI HBA Driver
Apr 14 13:17:57.315510 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 14 13:17:57.411142 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 14 13:17:57.470611 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 14 13:17:57.471726 kernel: device-mapper: uevent: version 1.0.3
Apr 14 13:17:57.477053 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 14 13:17:57.587696 kernel: raid6: avx512x4 gen() 27202 MB/s
Apr 14 13:17:57.606592 kernel: raid6: avx512x2 gen() 18465 MB/s
Apr 14 13:17:57.647711 kernel: raid6: avx512x1 gen() 14452 MB/s
Apr 14 13:17:57.665717 kernel: raid6: avx2x4 gen() 17736 MB/s
Apr 14 13:17:57.686894 kernel: raid6: avx2x2 gen() 16404 MB/s
Apr 14 13:17:57.705760 kernel: raid6: avx2x1 gen() 7486 MB/s
Apr 14 13:17:57.706087 kernel: raid6: using algorithm avx512x4 gen() 27202 MB/s
Apr 14 13:17:57.725739 kernel: raid6: .... xor() 5356 MB/s, rmw enabled
Apr 14 13:17:57.726612 kernel: raid6: using avx512x2 recovery algorithm
Apr 14 13:17:57.766592 kernel: xor: automatically using best checksumming function avx
Apr 14 13:17:58.176138 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 14 13:17:58.260262 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 13:17:58.284545 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 13:17:58.327460 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Apr 14 13:17:58.334041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 13:17:58.350865 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 14 13:17:58.388227 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Apr 14 13:17:58.630069 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 13:17:58.652041 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 13:17:58.893553 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 13:17:58.908946 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 14 13:17:58.980662 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 14 13:17:58.991981 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 13:17:58.998527 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 13:17:59.000724 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 13:17:59.015227 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 14 13:17:59.060372 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 14 13:17:59.084557 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 14 13:17:59.100086 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 14 13:17:59.100887 kernel: GPT:9289727 != 19775487
Apr 14 13:17:59.100903 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 14 13:17:59.102709 kernel: GPT:9289727 != 19775487
Apr 14 13:17:59.113136 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 14 13:17:59.118705 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:17:59.203716 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 13:17:59.221976 kernel: libata version 3.00 loaded.
Apr 14 13:17:59.222005 kernel: cryptd: max_cpu_qlen set to 1000
Apr 14 13:17:59.230332 kernel: ahci 0000:00:1f.2: version 3.0
Apr 14 13:17:59.230553 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 14 13:17:59.235081 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 13:17:59.239583 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 14 13:17:59.239761 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 14 13:17:59.237951 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:17:59.242977 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 13:17:59.379267 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 14 13:17:59.380551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 13:17:59.384000 kernel: AES CTR mode by8 optimization enabled
Apr 14 13:17:59.380906 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:17:59.386158 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:17:59.401823 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:17:59.414326 kernel: scsi host0: ahci
Apr 14 13:17:59.418441 kernel: scsi host1: ahci
Apr 14 13:17:59.422563 kernel: scsi host2: ahci
Apr 14 13:17:59.426322 kernel: scsi host3: ahci
Apr 14 13:17:59.432153 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 14 13:17:59.443241 kernel: scsi host4: ahci
Apr 14 13:17:59.443804 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Apr 14 13:17:59.443815 kernel: scsi host5: ahci
Apr 14 13:17:59.445331 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 14 13:17:59.451711 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 14 13:17:59.452742 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 14 13:17:59.452781 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 14 13:17:59.452789 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 14 13:17:59.452796 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 14 13:17:59.452808 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 14 13:17:59.452815 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (485)
Apr 14 13:17:59.695377 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 14 13:17:59.752763 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 14 13:17:59.764301 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 14 13:17:59.764378 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 14 13:17:59.764387 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 14 13:17:59.760040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:17:59.787014 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 14 13:17:59.787042 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 14 13:17:59.787053 kernel: ata3.00: applying bridge limits
Apr 14 13:17:59.787063 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 14 13:17:59.787084 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 14 13:17:59.787094 kernel: ata3.00: configured for UDMA/100
Apr 14 13:17:59.787105 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 14 13:17:59.793346 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 13:17:59.816935 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 14 13:17:59.865854 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 13:17:59.884873 disk-uuid[565]: Primary Header is updated.
Apr 14 13:17:59.884873 disk-uuid[565]: Secondary Entries is updated.
Apr 14 13:17:59.884873 disk-uuid[565]: Secondary Header is updated.
Apr 14 13:17:59.893366 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:17:59.902409 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:17:59.905887 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:17:59.957148 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 14 13:17:59.958640 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 14 13:17:59.958657 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:17:59.960363 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 14 13:18:00.910224 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:18:00.911083 disk-uuid[573]: The operation has completed successfully.
Apr 14 13:18:00.987405 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 14 13:18:00.987583 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 14 13:18:01.076313 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 14 13:18:01.085861 sh[596]: Success
Apr 14 13:18:01.117735 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 14 13:18:01.288467 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 14 13:18:01.325748 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 14 13:18:01.331899 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 14 13:18:01.417741 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 14 13:18:01.418009 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:18:01.468673 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 14 13:18:01.469094 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 14 13:18:01.469108 kernel: BTRFS info (device dm-0): using free space tree
Apr 14 13:18:01.491550 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 14 13:18:01.497207 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 14 13:18:01.523925 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 14 13:18:01.529681 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 14 13:18:01.553734 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:18:01.554133 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:18:01.554148 kernel: BTRFS info (device vda6): using free space tree
Apr 14 13:18:01.561471 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 13:18:01.583870 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 14 13:18:01.591444 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:18:01.678853 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 14 13:18:01.687220 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 14 13:18:02.381792 ignition[681]: Ignition 2.19.0
Apr 14 13:18:02.381815 ignition[681]: Stage: fetch-offline
Apr 14 13:18:02.381905 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Apr 14 13:18:02.381917 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:18:02.382171 ignition[681]: parsed url from cmdline: ""
Apr 14 13:18:02.382175 ignition[681]: no config URL provided
Apr 14 13:18:02.382180 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Apr 14 13:18:02.382189 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Apr 14 13:18:02.382311 ignition[681]: op(1): [started] loading QEMU firmware config module
Apr 14 13:18:02.382318 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 14 13:18:02.414991 ignition[681]: op(1): [finished] loading QEMU firmware config module
Apr 14 13:18:02.504626 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 13:18:02.543940 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 13:18:02.699602 ignition[681]: parsing config with SHA512: cde30dffe76b05d1bd173a4240a25792af45c42922697c488eef1cd57188fe26c94c225ddaece30db0546f1d7c276cab20e5fdaa1851adbcd226cc20783fb3a5
Apr 14 13:18:02.707951 systemd-networkd[784]: lo: Link UP
Apr 14 13:18:02.708624 systemd-networkd[784]: lo: Gained carrier
Apr 14 13:18:02.756722 systemd-networkd[784]: Enumeration completed
Apr 14 13:18:02.758247 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 13:18:02.759492 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:18:02.759496 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 13:18:02.760765 systemd-networkd[784]: eth0: Link UP
Apr 14 13:18:02.760770 systemd-networkd[784]: eth0: Gained carrier
Apr 14 13:18:02.760780 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:18:02.761132 systemd[1]: Reached target network.target - Network.
Apr 14 13:18:02.809110 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 13:18:02.850782 unknown[681]: fetched base config from "system"
Apr 14 13:18:02.850845 unknown[681]: fetched user config from "qemu"
Apr 14 13:18:02.858923 ignition[681]: fetch-offline: fetch-offline passed
Apr 14 13:18:02.859075 ignition[681]: Ignition finished successfully
Apr 14 13:18:02.863599 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 13:18:02.880916 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 14 13:18:02.965347 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 14 13:18:03.303059 ignition[788]: Ignition 2.19.0
Apr 14 13:18:03.306694 ignition[788]: Stage: kargs
Apr 14 13:18:03.313220 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Apr 14 13:18:03.356259 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:18:03.389571 ignition[788]: kargs: kargs passed
Apr 14 13:18:03.391908 ignition[788]: Ignition finished successfully
Apr 14 13:18:03.480046 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 14 13:18:03.508003 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 14 13:18:03.777830 ignition[796]: Ignition 2.19.0
Apr 14 13:18:03.777850 ignition[796]: Stage: disks
Apr 14 13:18:03.782172 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Apr 14 13:18:03.782630 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:18:03.898659 ignition[796]: disks: disks passed
Apr 14 13:18:03.898974 ignition[796]: Ignition finished successfully
Apr 14 13:18:03.937155 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 14 13:18:03.945233 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 14 13:18:03.948548 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 14 13:18:03.979665 systemd-networkd[784]: eth0: Gained IPv6LL
Apr 14 13:18:03.982201 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 13:18:03.989979 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 13:18:04.007423 systemd[1]: Reached target basic.target - Basic System.
Apr 14 13:18:04.100157 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 14 13:18:04.530394 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 14 13:18:04.564881 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 14 13:18:04.612843 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 14 13:18:05.354341 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 14 13:18:05.356923 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 14 13:18:05.360012 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 14 13:18:05.376317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 13:18:05.381796 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 14 13:18:05.386085 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 14 13:18:05.386191 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 14 13:18:05.425499 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Apr 14 13:18:05.386211 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 13:18:05.438779 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:18:05.438867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:18:05.438880 kernel: BTRFS info (device vda6): using free space tree
Apr 14 13:18:05.443923 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 14 13:18:05.457357 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 13:18:05.458773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 13:18:05.508733 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 14 13:18:05.954875 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Apr 14 13:18:05.966139 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Apr 14 13:18:05.981089 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Apr 14 13:18:05.996414 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 14 13:18:07.436848 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 14 13:18:07.468062 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 14 13:18:07.504507 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 14 13:18:07.579985 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 14 13:18:07.581808 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:18:07.653765 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 14 13:18:07.770213 ignition[928]: INFO : Ignition 2.19.0
Apr 14 13:18:07.770213 ignition[928]: INFO : Stage: mount
Apr 14 13:18:07.779224 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 13:18:07.779224 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:18:07.779224 ignition[928]: INFO : mount: mount passed
Apr 14 13:18:07.779224 ignition[928]: INFO : Ignition finished successfully
Apr 14 13:18:07.792965 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 14 13:18:07.845163 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 14 13:18:08.083422 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 13:18:08.167604 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Apr 14 13:18:08.172622 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:18:08.173332 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:18:08.173352 kernel: BTRFS info (device vda6): using free space tree
Apr 14 13:18:08.178563 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 13:18:08.188375 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 13:18:08.376126 ignition[958]: INFO : Ignition 2.19.0
Apr 14 13:18:08.376126 ignition[958]: INFO : Stage: files
Apr 14 13:18:08.396157 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 13:18:08.396157 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:18:08.396157 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Apr 14 13:18:08.482076 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 14 13:18:08.482076 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 14 13:18:08.501011 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 14 13:18:08.501011 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 14 13:18:08.600092 unknown[958]: wrote ssh authorized keys file for user: core
Apr 14 13:18:08.663752 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 14 13:18:08.866228 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 13:18:08.870962 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 14 13:18:09.097117 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 14 13:18:09.834773 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 13:18:09.840804 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 14 13:18:09.844720 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 14 13:18:09.844720 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 13:18:09.844720 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 13:18:09.844720 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 13:18:09.844720 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 13:18:09.844720 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 13:18:09.844720 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 13:18:09.844720 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 13:18:09.870164 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 13:18:09.870164 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 13:18:09.870164 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 13:18:09.870164 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 13:18:09.870164 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 14 13:18:10.352769 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 14 13:18:14.147906 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 14 13:18:14.147906 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 14 13:18:14.165666 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 13:18:14.165666 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 13:18:14.165666 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 14 13:18:14.165666 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 14 13:18:14.165666 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 13:18:14.185607 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 13:18:14.185607 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 14 13:18:14.185607 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 14 13:18:14.485432 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 13:18:14.584380 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 13:18:14.589945 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 14 13:18:14.589945 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 14 13:18:14.589945 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 14 13:18:14.589945 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 13:18:14.589945 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 13:18:14.589945 ignition[958]: INFO : files: files passed
Apr 14 13:18:14.589945 ignition[958]: INFO : Ignition finished successfully
Apr 14 13:18:14.646244 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 14 13:18:14.685245 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 14 13:18:14.705191 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 14 13:18:14.713014 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 14 13:18:14.715512 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 14 13:18:14.812578 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 14 13:18:14.863171 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:18:14.863171 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:18:14.872001 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:18:14.881936 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 13:18:14.886693 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 14 13:18:14.905925 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 14 13:18:15.385780 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 14 13:18:15.413892 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 14 13:18:15.477997 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 14 13:18:15.485487 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 14 13:18:15.493207 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 14 13:18:15.515739 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 14 13:18:15.883170 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 13:18:15.909902 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 14 13:18:16.079675 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 14 13:18:16.080745 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 13:18:16.094945 systemd[1]: Stopped target timers.target - Timer Units.
Apr 14 13:18:16.100348 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 14 13:18:16.100590 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 13:18:16.192364 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 14 13:18:16.199869 systemd[1]: Stopped target basic.target - Basic System.
Apr 14 13:18:16.208083 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 14 13:18:16.216696 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 13:18:16.223967 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 14 13:18:16.233943 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 14 13:18:16.239985 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 13:18:16.245602 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 14 13:18:16.246735 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 14 13:18:16.263952 systemd[1]: Stopped target swap.target - Swaps.
Apr 14 13:18:16.264897 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 14 13:18:16.265145 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 13:18:16.274841 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 14 13:18:16.275398 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 13:18:16.287771 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 14 13:18:16.293258 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:18:16.307700 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 14 13:18:16.308714 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 14 13:18:16.395332 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 14 13:18:16.395703 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 13:18:16.399112 systemd[1]: Stopped target paths.target - Path Units.
Apr 14 13:18:16.408243 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 14 13:18:16.411429 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:18:16.423963 systemd[1]: Stopped target slices.target - Slice Units.
Apr 14 13:18:16.425545 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 14 13:18:16.434984 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 14 13:18:16.435142 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 13:18:16.435747 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 14 13:18:16.435814 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 13:18:16.444163 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 14 13:18:16.446187 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 13:18:16.448249 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 14 13:18:16.448416 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 14 13:18:16.477495 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 14 13:18:16.478870 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 14 13:18:16.479022 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 13:18:16.506355 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 14 13:18:16.512086 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 14 13:18:16.520490 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 13:18:16.575706 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 14 13:18:16.575968 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 13:18:16.602118 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 14 13:18:16.603143 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 14 13:18:16.654126 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 14 13:18:16.659216 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 14 13:18:16.659380 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 14 13:18:16.703078 ignition[1012]: INFO : Ignition 2.19.0
Apr 14 13:18:16.703078 ignition[1012]: INFO : Stage: umount
Apr 14 13:18:16.710967 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 13:18:16.710967 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:18:16.710967 ignition[1012]: INFO : umount: umount passed
Apr 14 13:18:16.710967 ignition[1012]: INFO : Ignition finished successfully
Apr 14 13:18:16.715701 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 14 13:18:16.715853 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 14 13:18:16.759097 systemd[1]: Stopped target network.target - Network.
Apr 14 13:18:16.763086 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 14 13:18:16.763390 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 14 13:18:16.764677 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 14 13:18:16.764732 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 14 13:18:16.774747 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 14 13:18:16.776472 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 14 13:18:16.777765 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 14 13:18:16.777846 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 14 13:18:16.781897 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 14 13:18:16.782009 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 14 13:18:16.796018 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 14 13:18:16.804133 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 14 13:18:16.853664 systemd-networkd[784]: eth0: DHCPv6 lease lost
Apr 14 13:18:16.864948 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 14 13:18:16.867002 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 14 13:18:16.880402 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 14 13:18:16.883571 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 14 13:18:16.892396 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 14 13:18:16.892497 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 13:18:16.949849 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 14 13:18:16.957704 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 14 13:18:16.958933 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 13:18:16.964947 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 14 13:18:16.965061 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 14 13:18:16.969613 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 14 13:18:16.969710 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 14 13:18:16.974917 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 14 13:18:16.975012 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 13:18:16.981270 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 13:18:17.011599 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 14 13:18:17.011776 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 13:18:17.071405 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 14 13:18:17.071502 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 14 13:18:17.079722 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 14 13:18:17.079861 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 13:18:17.083758 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 14 13:18:17.083938 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 13:18:17.096115 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 14 13:18:17.096192 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 14 13:18:17.110110 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 13:18:17.116113 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:18:17.174813 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 14 13:18:17.177562 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 14 13:18:17.177693 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 13:18:17.183250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 13:18:17.187632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:18:17.206684 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 14 13:18:17.206981 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 14 13:18:17.245168 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 14 13:18:17.246960 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 14 13:18:17.256165 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 14 13:18:17.298248 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 14 13:18:17.439794 systemd[1]: Switching root.
Apr 14 13:18:17.568914 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 14 13:18:17.569849 systemd-journald[194]: Journal stopped
Apr 14 13:18:30.807702 kernel: SELinux: policy capability network_peer_controls=1
Apr 14 13:18:30.807775 kernel: SELinux: policy capability open_perms=1
Apr 14 13:18:30.807789 kernel: SELinux: policy capability extended_socket_class=1
Apr 14 13:18:30.807804 kernel: SELinux: policy capability always_check_network=0
Apr 14 13:18:30.807812 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 14 13:18:30.807820 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 14 13:18:30.807829 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 14 13:18:30.807837 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 14 13:18:30.807845 kernel: audit: type=1403 audit(1776172698.294:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 14 13:18:30.807857 systemd[1]: Successfully loaded SELinux policy in 218.284ms.
Apr 14 13:18:30.807875 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 115.440ms.
Apr 14 13:18:30.807884 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 13:18:30.807896 systemd[1]: Detected virtualization kvm.
Apr 14 13:18:30.807904 systemd[1]: Detected architecture x86-64.
Apr 14 13:18:30.807913 systemd[1]: Detected first boot.
Apr 14 13:18:30.807923 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 13:18:30.807932 zram_generator::config[1056]: No configuration found.
Apr 14 13:18:30.807942 systemd[1]: Populated /etc with preset unit settings.
Apr 14 13:18:30.807951 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 14 13:18:30.807959 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 14 13:18:30.807968 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 14 13:18:30.807977 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 14 13:18:30.807985 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 14 13:18:30.807995 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 14 13:18:30.808003 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 14 13:18:30.808012 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 14 13:18:30.808020 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 14 13:18:30.808028 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 14 13:18:30.808036 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 14 13:18:30.808044 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:18:30.808053 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:18:30.808061 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 14 13:18:30.808070 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 14 13:18:30.808079 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 14 13:18:30.808087 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 13:18:30.808096 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 14 13:18:30.808104 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 13:18:30.808112 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 14 13:18:30.808119 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 14 13:18:30.808127 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 14 13:18:30.808136 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 14 13:18:30.808145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 13:18:30.808154 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 13:18:30.808162 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 13:18:30.808169 systemd[1]: Reached target swap.target - Swaps.
Apr 14 13:18:30.808177 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 14 13:18:30.808186 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 14 13:18:30.808194 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 13:18:30.808203 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 13:18:30.808212 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 13:18:30.808221 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 14 13:18:30.808231 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 14 13:18:30.808240 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 14 13:18:30.808248 systemd[1]: Mounting media.mount - External Media Directory...
Apr 14 13:18:30.808256 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:18:30.808264 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 14 13:18:30.808273 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 14 13:18:30.814621 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 14 13:18:30.814719 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 14 13:18:30.814729 systemd[1]: Reached target machines.target - Containers.
Apr 14 13:18:30.814737 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 14 13:18:30.814745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 13:18:30.814754 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 13:18:30.859868 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 14 13:18:30.860026 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 13:18:30.860041 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 13:18:30.860062 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 13:18:30.860077 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 14 13:18:30.860090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 13:18:30.860107 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 14 13:18:30.860120 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 14 13:18:30.860134 kernel: loop: module loaded
Apr 14 13:18:30.860148 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 14 13:18:30.860162 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 14 13:18:30.860177 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 14 13:18:30.860191 kernel: ACPI: bus type drm_connector registered
Apr 14 13:18:30.860207 kernel: fuse: init (API version 7.39)
Apr 14 13:18:30.860220 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 13:18:30.860233 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 13:18:30.860248 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 14 13:18:30.860261 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 14 13:18:30.860373 systemd-journald[1140]: Collecting audit messages is disabled.
Apr 14 13:18:30.860406 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 13:18:30.860421 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 14 13:18:30.860435 systemd[1]: Stopped verity-setup.service.
Apr 14 13:18:30.860448 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:18:30.860462 systemd-journald[1140]: Journal started
Apr 14 13:18:30.860489 systemd-journald[1140]: Runtime Journal (/run/log/journal/c8ab17b3ef1049ea995dbb89a586e29f) is 6.0M, max 48.4M, 42.3M free.
Apr 14 13:18:27.790475 systemd[1]: Queued start job for default target multi-user.target.
Apr 14 13:18:28.112616 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 14 13:18:28.156131 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 14 13:18:28.187706 systemd[1]: systemd-journald.service: Consumed 1.697s CPU time.
Apr 14 13:18:30.870541 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 13:18:30.884461 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 14 13:18:30.890691 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 14 13:18:30.893834 systemd[1]: Mounted media.mount - External Media Directory.
Apr 14 13:18:30.899575 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 14 13:18:30.903060 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 14 13:18:30.906434 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 14 13:18:30.947624 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 14 13:18:30.955569 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 13:18:30.962969 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 14 13:18:30.964782 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 14 13:18:30.968583 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 13:18:30.968824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 13:18:30.973558 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 13:18:30.973823 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 13:18:30.981767 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 13:18:30.983104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 13:18:30.990644 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 14 13:18:30.990913 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 14 13:18:30.994721 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 13:18:30.994868 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 13:18:31.000433 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 13:18:31.002995 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 14 13:18:31.007593 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 14 13:18:31.069235 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 14 13:18:31.085794 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 14 13:18:31.107814 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 14 13:18:31.114937 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 14 13:18:31.114992 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 13:18:31.153524 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 14 13:18:31.181375 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 14 13:18:31.254154 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 14 13:18:31.256471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 13:18:31.268651 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 14 13:18:31.280737 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 14 13:18:31.283424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 13:18:31.289431 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 14 13:18:31.295713 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 13:18:31.339200 systemd-journald[1140]: Time spent on flushing to /var/log/journal/c8ab17b3ef1049ea995dbb89a586e29f is 348.335ms for 950 entries.
Apr 14 13:18:31.339200 systemd-journald[1140]: System Journal (/var/log/journal/c8ab17b3ef1049ea995dbb89a586e29f) is 8.0M, max 195.6M, 187.6M free.
Apr 14 13:18:31.777565 systemd-journald[1140]: Received client request to flush runtime journal.
Apr 14 13:18:31.675725 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 13:18:31.701576 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 14 13:18:31.800961 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 14 13:18:31.922935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 13:18:32.175712 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 14 13:18:32.181673 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 14 13:18:32.186186 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 14 13:18:32.189855 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 14 13:18:32.205126 kernel: loop0: detected capacity change from 0 to 140768
Apr 14 13:18:32.206487 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 14 13:18:32.216650 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 14 13:18:32.237681 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 14 13:18:32.250544 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 14 13:18:32.253838 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 13:18:32.291394 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 14 13:18:32.358650 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 14 13:18:32.364002 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 14 13:18:32.382124 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 14 13:18:32.390893 kernel: loop1: detected capacity change from 0 to 219192
Apr 14 13:18:32.425058 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 14 13:18:32.444157 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 13:18:32.466345 kernel: loop2: detected capacity change from 0 to 142488
Apr 14 13:18:32.707523 kernel: loop3: detected capacity change from 0 to 140768
Apr 14 13:18:32.731454 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Apr 14 13:18:32.732394 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Apr 14 13:18:32.743835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 13:18:32.758642 kernel: loop4: detected capacity change from 0 to 219192
Apr 14 13:18:32.778311 kernel: loop5: detected capacity change from 0 to 142488
Apr 14 13:18:32.805231 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 14 13:18:32.807461 (sd-merge)[1193]: Merged extensions into '/usr'.
Apr 14 13:18:32.832018 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 14 13:18:32.832039 systemd[1]: Reloading...
Apr 14 13:18:33.207898 zram_generator::config[1220]: No configuration found.
Apr 14 13:18:33.771359 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 14 13:18:34.400163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:18:34.708243 systemd[1]: Reloading finished in 1873 ms.
Apr 14 13:18:34.866852 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 14 13:18:34.872660 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 14 13:18:34.911027 systemd[1]: Starting ensure-sysext.service...
Apr 14 13:18:34.958820 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 13:18:34.984343 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Apr 14 13:18:34.984390 systemd[1]: Reloading...
Apr 14 13:18:35.215962 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 14 13:18:35.216238 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 14 13:18:35.254187 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 14 13:18:35.254402 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Apr 14 13:18:35.254443 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Apr 14 13:18:35.264501 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 13:18:35.264524 systemd-tmpfiles[1258]: Skipping /boot
Apr 14 13:18:35.321202 zram_generator::config[1285]: No configuration found.
Apr 14 13:18:35.376030 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 13:18:35.377042 systemd-tmpfiles[1258]: Skipping /boot
Apr 14 13:18:36.707456 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:18:36.992071 systemd[1]: Reloading finished in 2007 ms.
Apr 14 13:18:37.074697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 13:18:37.109439 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 13:18:37.173795 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 14 13:18:37.178999 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 14 13:18:37.207823 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 13:18:37.283108 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 14 13:18:38.027362 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 14 13:18:38.069265 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 14 13:18:38.102037 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 14 13:18:38.200820 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 14 13:18:38.209148 augenrules[1344]: No rules
Apr 14 13:18:38.215368 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 13:18:38.251845 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 14 13:18:38.273230 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:18:38.274987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 13:18:38.305029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 13:18:38.335447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 13:18:38.341165 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 13:18:38.347996 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 13:18:38.366435 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 13:18:38.392681 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 14 13:18:38.400961 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 14 13:18:38.401474 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 13:18:38.415798 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 14 13:18:38.449718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 13:18:38.449958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 13:18:38.456777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 13:18:38.456968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 13:18:38.460116 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 13:18:38.460337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 13:18:38.468778 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 14 13:18:38.487001 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 13:18:38.487164 systemd-udevd[1362]: Using default interface naming scheme 'v255'. Apr 14 13:18:38.487177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 13:18:38.532167 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 13:18:38.602644 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 13:18:38.613026 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 14 13:18:38.613195 systemd-resolved[1333]: Positive Trust Anchors: Apr 14 13:18:38.616547 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 14 13:18:38.616738 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 14 13:18:38.632687 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 13:18:38.633991 systemd-resolved[1333]: Defaulting to hostname 'linux'. Apr 14 13:18:38.635605 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 13:18:38.638723 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 14 13:18:38.638753 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 13:18:38.639370 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 14 13:18:38.642760 systemd[1]: Finished ensure-sysext.service. Apr 14 13:18:38.646303 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 13:18:38.664099 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 13:18:38.664364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Apr 14 13:18:38.668050 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 13:18:38.668243 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 13:18:38.673318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 13:18:38.674990 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 13:18:38.677867 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 13:18:38.682753 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 13:18:38.835951 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 14 13:18:38.857643 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 14 13:18:38.859789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 13:18:38.859955 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 13:18:38.879969 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 14 13:18:38.891854 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 14 13:18:39.229638 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1390) Apr 14 13:18:39.297985 systemd-networkd[1397]: lo: Link UP Apr 14 13:18:39.298011 systemd-networkd[1397]: lo: Gained carrier Apr 14 13:18:39.317207 systemd-networkd[1397]: Enumeration completed Apr 14 13:18:39.319066 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 14 13:18:39.407925 systemd[1]: Reached target network.target - Network. Apr 14 13:18:39.415465 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 14 13:18:39.415488 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 14 13:18:39.423142 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 14 13:18:39.426860 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 13:18:39.426898 systemd-networkd[1397]: eth0: Link UP Apr 14 13:18:39.426902 systemd-networkd[1397]: eth0: Gained carrier Apr 14 13:18:39.426912 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 13:18:39.455784 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 14 13:18:39.478984 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 14 13:18:39.959651 systemd-resolved[1333]: Clock change detected. Flushing caches. Apr 14 13:18:39.959731 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 14 13:18:39.959802 systemd-timesyncd[1398]: Initial clock synchronization to Tue 2026-04-14 13:18:39.959299 UTC. Apr 14 13:18:39.960127 systemd[1]: Reached target time-set.target - System Time Set. Apr 14 13:18:40.172249 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 14 13:18:40.174475 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 14 13:18:40.194514 kernel: ACPI: button: Power Button [PWRF] Apr 14 13:18:40.255647 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Apr 14 13:18:40.327951 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 14 13:18:40.383782 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 14 13:18:40.392319 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 14 13:18:40.516204 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 14 13:18:40.384605 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 14 13:18:41.050482 systemd-networkd[1397]: eth0: Gained IPv6LL Apr 14 13:18:41.120978 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 14 13:18:41.150791 systemd[1]: Reached target network-online.target - Network is Online. Apr 14 13:18:41.177734 kernel: mousedev: PS/2 mouse device common for all mice Apr 14 13:18:41.279682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 13:18:42.244337 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 14 13:18:42.263844 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 14 13:18:42.326318 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 14 13:18:42.506055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 13:18:42.513371 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 14 13:18:42.564189 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 14 13:18:42.581378 systemd[1]: Reached target sysinit.target - System Initialization. Apr 14 13:18:42.645791 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 14 13:18:42.659671 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Apr 14 13:18:42.670208 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 14 13:18:42.672244 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 14 13:18:42.688444 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 14 13:18:42.745931 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 14 13:18:42.746412 systemd[1]: Reached target paths.target - Path Units. Apr 14 13:18:42.760019 systemd[1]: Reached target timers.target - Timer Units. Apr 14 13:18:42.883094 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 14 13:18:43.041202 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 14 13:18:43.216534 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 14 13:18:43.252236 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 14 13:18:43.276210 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 14 13:18:43.282873 systemd[1]: Reached target sockets.target - Socket Units. Apr 14 13:18:43.314992 systemd[1]: Reached target basic.target - Basic System. Apr 14 13:18:43.325318 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 14 13:18:43.325364 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 14 13:18:43.382690 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 14 13:18:43.387328 systemd[1]: Starting containerd.service - containerd container runtime... Apr 14 13:18:43.437439 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 14 13:18:43.514296 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Apr 14 13:18:43.582721 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 14 13:18:43.896887 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 14 13:18:43.900838 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 14 13:18:43.906081 jq[1431]: false Apr 14 13:18:43.918642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:18:43.970813 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 14 13:18:43.979782 dbus-daemon[1430]: [system] SELinux support is enabled Apr 14 13:18:43.987755 extend-filesystems[1432]: Found loop3 Apr 14 13:18:43.987755 extend-filesystems[1432]: Found loop4 Apr 14 13:18:43.987755 extend-filesystems[1432]: Found loop5 Apr 14 13:18:43.987755 extend-filesystems[1432]: Found sr0 Apr 14 13:18:43.987755 extend-filesystems[1432]: Found vda Apr 14 13:18:43.987755 extend-filesystems[1432]: Found vda1 Apr 14 13:18:43.987755 extend-filesystems[1432]: Found vda2 Apr 14 13:18:43.987755 extend-filesystems[1432]: Found vda3 Apr 14 13:18:43.987755 extend-filesystems[1432]: Found usr Apr 14 13:18:43.987755 extend-filesystems[1432]: Found vda4 Apr 14 13:18:43.987755 extend-filesystems[1432]: Found vda6 Apr 14 13:18:43.987755 extend-filesystems[1432]: Found vda7 Apr 14 13:18:43.987755 extend-filesystems[1432]: Found vda9 Apr 14 13:18:43.987755 extend-filesystems[1432]: Checking size of /dev/vda9 Apr 14 13:18:43.983111 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 14 13:18:44.081194 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 14 13:18:44.109349 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Apr 14 13:18:44.111661 extend-filesystems[1432]: Resized partition /dev/vda9 Apr 14 13:18:44.126468 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 14 13:18:44.128289 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Apr 14 13:18:44.138773 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 14 13:18:44.182859 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 14 13:18:44.263905 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 14 13:18:44.272261 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1390) Apr 14 13:18:44.264825 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 14 13:18:44.285848 systemd[1]: Starting update-engine.service - Update Engine... Apr 14 13:18:44.306562 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 14 13:18:44.311536 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 14 13:18:44.317097 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 14 13:18:44.327806 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 14 13:18:44.334268 jq[1457]: true Apr 14 13:18:44.335518 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 14 13:18:44.335518 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 14 13:18:44.335518 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 14 13:18:44.366627 extend-filesystems[1432]: Resized filesystem in /dev/vda9 Apr 14 13:18:44.380231 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Apr 14 13:18:44.464065 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 14 13:18:44.498131 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 14 13:18:44.498944 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 14 13:18:44.514689 systemd[1]: motdgen.service: Deactivated successfully. Apr 14 13:18:44.514850 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 14 13:18:44.528763 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 14 13:18:44.547110 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 14 13:18:44.552010 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 14 13:18:44.608151 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 14 13:18:44.635169 update_engine[1456]: I20260414 13:18:44.620909 1456 main.cc:92] Flatcar Update Engine starting Apr 14 13:18:44.638620 update_engine[1456]: I20260414 13:18:44.638499 1456 update_check_scheduler.cc:74] Next update check in 6m8s Apr 14 13:18:44.649332 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 14 13:18:44.653463 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 14 13:18:44.684044 systemd-logind[1454]: Watching system buttons on /dev/input/event1 (Power Button) Apr 14 13:18:44.688393 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 14 13:18:44.778413 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 14 13:18:44.733010 systemd-logind[1454]: New seat seat0. Apr 14 13:18:44.736693 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 14 13:18:44.831003 tar[1466]: linux-amd64/LICENSE Apr 14 13:18:44.850857 tar[1466]: linux-amd64/helm Apr 14 13:18:44.852738 systemd[1]: Started update-engine.service - Update Engine. Apr 14 13:18:44.935429 jq[1467]: true Apr 14 13:18:44.939179 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 14 13:18:44.942462 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 14 13:18:44.944056 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 14 13:18:44.947137 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 14 13:18:44.947298 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 14 13:18:45.043714 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 14 13:18:45.929704 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 14 13:18:45.932656 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 14 13:18:46.022684 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Apr 14 13:18:46.026508 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 14 13:18:46.134784 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 14 13:18:46.137405 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 14 13:18:46.454147 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 14 13:18:46.533734 systemd[1]: issuegen.service: Deactivated successfully. 
Apr 14 13:18:46.533971 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 14 13:18:46.661436 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 14 13:18:46.924369 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 14 13:18:46.958104 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 14 13:18:47.078469 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 14 13:18:47.088511 systemd[1]: Reached target getty.target - Login Prompts. Apr 14 13:18:47.544389 containerd[1468]: time="2026-04-14T13:18:47.543234660Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 14 13:18:47.867697 containerd[1468]: time="2026-04-14T13:18:47.864492369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 14 13:18:47.877158 containerd[1468]: time="2026-04-14T13:18:47.875975043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:18:47.877158 containerd[1468]: time="2026-04-14T13:18:47.876115569Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 14 13:18:47.877158 containerd[1468]: time="2026-04-14T13:18:47.876189732Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 14 13:18:47.878777 containerd[1468]: time="2026-04-14T13:18:47.878722151Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 14 13:18:47.878836 containerd[1468]: time="2026-04-14T13:18:47.878805237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Apr 14 13:18:47.883919 containerd[1468]: time="2026-04-14T13:18:47.883616868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:18:47.883919 containerd[1468]: time="2026-04-14T13:18:47.883678136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 14 13:18:47.883919 containerd[1468]: time="2026-04-14T13:18:47.884156877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:18:47.883919 containerd[1468]: time="2026-04-14T13:18:47.884184154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 14 13:18:47.883919 containerd[1468]: time="2026-04-14T13:18:47.884268634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:18:47.883919 containerd[1468]: time="2026-04-14T13:18:47.884282057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 14 13:18:47.884951 containerd[1468]: time="2026-04-14T13:18:47.884420404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 14 13:18:47.884951 containerd[1468]: time="2026-04-14T13:18:47.884878088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 14 13:18:47.885186 containerd[1468]: time="2026-04-14T13:18:47.885156581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:18:47.885272 containerd[1468]: time="2026-04-14T13:18:47.885189402Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 14 13:18:47.897973 containerd[1468]: time="2026-04-14T13:18:47.895495596Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 14 13:18:47.902458 containerd[1468]: time="2026-04-14T13:18:47.902263042Z" level=info msg="metadata content store policy set" policy=shared Apr 14 13:18:47.925523 containerd[1468]: time="2026-04-14T13:18:47.924929459Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 14 13:18:47.927529 containerd[1468]: time="2026-04-14T13:18:47.927182349Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 14 13:18:47.927529 containerd[1468]: time="2026-04-14T13:18:47.927343608Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 14 13:18:47.928468 containerd[1468]: time="2026-04-14T13:18:47.927568974Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 14 13:18:47.928468 containerd[1468]: time="2026-04-14T13:18:47.927648012Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 14 13:18:47.928468 containerd[1468]: time="2026-04-14T13:18:47.928186712Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 14 13:18:47.934297 containerd[1468]: time="2026-04-14T13:18:47.932546997Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Apr 14 13:18:47.947450 containerd[1468]: time="2026-04-14T13:18:47.944766423Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948316814Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948457127Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948478062Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948509318Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948537625Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948569282Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948663276Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948682153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948696360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948733149Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948848065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948883023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948897517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.949070 containerd[1468]: time="2026-04-14T13:18:47.948912682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.948927266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.948980918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.948999002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.949032229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.949059520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.949079992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.949095574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.949109866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.949150310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.949173904Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.949241365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.949257033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.949269646Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 14 13:18:47.964923 containerd[1468]: time="2026-04-14T13:18:47.953041140Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 14 13:18:48.099041 containerd[1468]: time="2026-04-14T13:18:47.954270461Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 14 13:18:48.099041 containerd[1468]: time="2026-04-14T13:18:47.954421154Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Apr 14 13:18:48.099041 containerd[1468]: time="2026-04-14T13:18:47.954453972Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 14 13:18:48.099041 containerd[1468]: time="2026-04-14T13:18:47.954467500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 14 13:18:48.099041 containerd[1468]: time="2026-04-14T13:18:47.954705932Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 14 13:18:48.099041 containerd[1468]: time="2026-04-14T13:18:47.954742446Z" level=info msg="NRI interface is disabled by configuration." Apr 14 13:18:48.099041 containerd[1468]: time="2026-04-14T13:18:47.954755866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 14 13:18:48.102538 systemd[1]: Started containerd.service - containerd container runtime. 
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:47.964646346Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:47.964881333Z" level=info msg="Connect containerd service"
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:47.965051567Z" level=info msg="using legacy CRI server"
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:47.965064194Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:47.987931074Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:48.072763089Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:48.096298992Z" level=info msg="Start subscribing containerd event"
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:48.097434458Z" level=info msg="Start recovering state"
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:48.097722243Z" level=info msg="Start event monitor"
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:48.099103645Z" level=info msg="Start snapshots syncer"
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:48.099357027Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:48.099444016Z" level=info msg="Start cni network conf syncer for default"
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:48.099462645Z" level=info msg="Start streaming server"
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:48.100193511Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 14 13:18:48.103076 containerd[1468]: time="2026-04-14T13:18:48.101801380Z" level=info msg="containerd successfully booted in 0.567982s"
Apr 14 13:18:49.533524 tar[1466]: linux-amd64/README.md
Apr 14 13:18:49.910097 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 14 13:18:50.581528 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 14 13:18:50.699105 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:43474.service - OpenSSH per-connection server daemon (10.0.0.1:43474).
Apr 14 13:18:51.109147 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 43474 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:18:51.117614 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:18:51.725984 systemd-logind[1454]: New session 1 of user core.
Apr 14 13:18:51.726894 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 14 13:18:51.755654 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 14 13:18:51.929738 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 14 13:18:51.988185 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 14 13:18:52.176969 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 14 13:18:52.954027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:18:52.970899 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 13:18:52.977390 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 14 13:18:53.238317 systemd[1541]: Queued start job for default target default.target.
Apr 14 13:18:53.280126 systemd[1541]: Created slice app.slice - User Application Slice.
Apr 14 13:18:53.280176 systemd[1541]: Reached target paths.target - Paths.
Apr 14 13:18:53.280192 systemd[1541]: Reached target timers.target - Timers.
Apr 14 13:18:53.332466 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 14 13:18:53.538897 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 14 13:18:53.543244 systemd[1541]: Reached target sockets.target - Sockets.
Apr 14 13:18:53.543259 systemd[1541]: Reached target basic.target - Basic System.
Apr 14 13:18:53.544433 systemd[1541]: Reached target default.target - Main User Target.
Apr 14 13:18:53.544519 systemd[1541]: Startup finished in 1.237s.
Apr 14 13:18:53.544807 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 14 13:18:53.557707 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 14 13:18:53.561443 systemd[1]: Startup finished in 2.880s (kernel) + 23.035s (initrd) + 34.955s (userspace) = 1min 870ms.
Apr 14 13:18:53.919644 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:43484.service - OpenSSH per-connection server daemon (10.0.0.1:43484).
Apr 14 13:18:54.053667 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 43484 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:18:54.132664 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:18:54.147754 systemd-logind[1454]: New session 2 of user core.
Apr 14 13:18:54.499560 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 14 13:18:54.768275 sshd[1568]: pam_unix(sshd:session): session closed for user core
Apr 14 13:18:54.806677 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:43484.service: Deactivated successfully.
Apr 14 13:18:54.811167 systemd[1]: session-2.scope: Deactivated successfully.
Apr 14 13:18:54.812015 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit.
Apr 14 13:18:54.821619 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:43486.service - OpenSSH per-connection server daemon (10.0.0.1:43486).
Apr 14 13:18:54.825674 systemd-logind[1454]: Removed session 2.
Apr 14 13:18:54.959273 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 43486 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:18:54.963194 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:18:55.012500 systemd-logind[1454]: New session 3 of user core.
Apr 14 13:18:55.041937 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 14 13:18:55.235641 sshd[1575]: pam_unix(sshd:session): session closed for user core
Apr 14 13:18:55.267983 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:43486.service: Deactivated successfully.
Apr 14 13:18:55.284603 systemd[1]: session-3.scope: Deactivated successfully.
Apr 14 13:18:55.350760 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit.
Apr 14 13:18:55.362152 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:43500.service - OpenSSH per-connection server daemon (10.0.0.1:43500).
Apr 14 13:18:55.363451 systemd-logind[1454]: Removed session 3.
Apr 14 13:18:55.431413 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 43500 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:18:55.435951 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:18:55.476401 systemd-logind[1454]: New session 4 of user core.
Apr 14 13:18:55.512749 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 14 13:18:55.650751 sshd[1584]: pam_unix(sshd:session): session closed for user core
Apr 14 13:18:55.660142 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:43500.service: Deactivated successfully.
Apr 14 13:18:55.661517 systemd[1]: session-4.scope: Deactivated successfully.
Apr 14 13:18:55.663673 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit.
Apr 14 13:18:55.683211 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:43512.service - OpenSSH per-connection server daemon (10.0.0.1:43512).
Apr 14 13:18:55.686681 systemd-logind[1454]: Removed session 4.
Apr 14 13:18:56.150065 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 43512 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:18:56.152562 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:18:56.153200 kubelet[1556]: E0414 13:18:56.153113 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 13:18:56.166468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 13:18:56.170551 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 13:18:56.187275 systemd[1]: kubelet.service: Consumed 6.688s CPU time.
Apr 14 13:18:56.310325 systemd-logind[1454]: New session 5 of user core.
Apr 14 13:18:56.332146 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 14 13:18:56.614965 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 14 13:18:56.615338 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 13:18:56.640546 sudo[1595]: pam_unix(sudo:session): session closed for user root
Apr 14 13:18:56.659158 sshd[1591]: pam_unix(sshd:session): session closed for user core
Apr 14 13:18:56.827616 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:43512.service: Deactivated successfully.
Apr 14 13:18:56.839184 systemd[1]: session-5.scope: Deactivated successfully.
Apr 14 13:18:56.840405 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit.
Apr 14 13:18:56.862298 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:43520.service - OpenSSH per-connection server daemon (10.0.0.1:43520).
Apr 14 13:18:56.886926 systemd-logind[1454]: Removed session 5.
Apr 14 13:18:56.971163 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 43520 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:18:56.977200 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:18:57.109473 systemd-logind[1454]: New session 6 of user core.
Apr 14 13:18:57.127188 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 14 13:18:57.612726 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 14 13:18:57.613837 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 13:18:57.646187 sudo[1604]: pam_unix(sudo:session): session closed for user root
Apr 14 13:18:57.828318 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 14 13:18:57.834405 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 13:18:58.016484 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 14 13:18:58.055138 auditctl[1607]: No rules
Apr 14 13:18:58.056155 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 14 13:18:58.056337 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 14 13:18:58.100651 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 13:18:58.469978 augenrules[1625]: No rules
Apr 14 13:18:58.506915 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 13:18:58.510821 sudo[1603]: pam_unix(sudo:session): session closed for user root
Apr 14 13:18:58.528785 sshd[1600]: pam_unix(sshd:session): session closed for user core
Apr 14 13:18:58.700229 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:43520.service: Deactivated successfully.
Apr 14 13:18:58.708179 systemd[1]: session-6.scope: Deactivated successfully.
Apr 14 13:18:58.712624 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit.
Apr 14 13:18:58.732242 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:43524.service - OpenSSH per-connection server daemon (10.0.0.1:43524).
Apr 14 13:18:58.735675 systemd-logind[1454]: Removed session 6.
Apr 14 13:18:58.973654 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 43524 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:18:59.010196 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:18:59.185529 systemd-logind[1454]: New session 7 of user core.
Apr 14 13:18:59.250024 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 14 13:18:59.553555 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 14 13:18:59.563797 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 13:19:04.852278 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 14 13:19:05.353287 (dockerd)[1656]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 14 13:19:06.245714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 14 13:19:06.354816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:19:09.907088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:19:10.033069 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 13:19:10.145615 dockerd[1656]: time="2026-04-14T13:19:10.138564783Z" level=info msg="Starting up"
Apr 14 13:19:10.648495 kubelet[1674]: E0414 13:19:10.647465 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 13:19:10.709718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 13:19:10.709892 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 13:19:10.710496 systemd[1]: kubelet.service: Consumed 2.490s CPU time.
Apr 14 13:19:10.961320 systemd[1]: var-lib-docker-metacopy\x2dcheck3250197684-merged.mount: Deactivated successfully.
Apr 14 13:19:11.427062 dockerd[1656]: time="2026-04-14T13:19:11.386856520Z" level=info msg="Loading containers: start."
Apr 14 13:19:13.989118 kernel: clocksource: timekeeping watchdog on CPU1: kvm-clock wd-wd read-back delay of 192650ns
Apr 14 13:19:14.044040 kernel: clocksource: wd-tsc-wd read-back delay of 190469ns, clock-skew test skipped!
Apr 14 13:19:15.151424 kernel: Initializing XFRM netlink socket
Apr 14 13:19:16.260064 systemd-networkd[1397]: docker0: Link UP
Apr 14 13:19:16.587666 dockerd[1656]: time="2026-04-14T13:19:16.586288087Z" level=info msg="Loading containers: done."
Apr 14 13:19:17.542188 dockerd[1656]: time="2026-04-14T13:19:17.541870614Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 14 13:19:17.543179 dockerd[1656]: time="2026-04-14T13:19:17.543107089Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 14 13:19:17.543486 dockerd[1656]: time="2026-04-14T13:19:17.543436308Z" level=info msg="Daemon has completed initialization"
Apr 14 13:19:17.543869 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4203438130-merged.mount: Deactivated successfully.
Apr 14 13:19:18.256003 dockerd[1656]: time="2026-04-14T13:19:18.255336193Z" level=info msg="API listen on /run/docker.sock"
Apr 14 13:19:18.261821 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 14 13:19:20.810972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 14 13:19:20.854098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:19:23.807319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:19:23.832061 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 13:19:24.539865 containerd[1468]: time="2026-04-14T13:19:24.537961579Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\""
Apr 14 13:19:25.065874 kubelet[1827]: E0414 13:19:25.063767 1827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 13:19:25.071262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 13:19:25.071508 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 13:19:25.088415 systemd[1]: kubelet.service: Consumed 2.494s CPU time.
Apr 14 13:19:27.040470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2282230964.mount: Deactivated successfully.
Apr 14 13:19:29.785168 update_engine[1456]: I20260414 13:19:29.778748 1456 update_attempter.cc:509] Updating boot flags...
Apr 14 13:19:29.975332 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1850)
Apr 14 13:19:30.218493 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1850)
Apr 14 13:19:30.445397 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1850)
Apr 14 13:19:35.288009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 14 13:19:35.341001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:19:37.257423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:19:37.379327 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 13:19:37.840788 kubelet[1918]: E0414 13:19:37.840438 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 13:19:37.844996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 13:19:37.845178 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 13:19:37.856985 systemd[1]: kubelet.service: Consumed 1.175s CPU time.
Apr 14 13:19:39.951214 containerd[1468]: time="2026-04-14T13:19:39.949340638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:39.965189 containerd[1468]: time="2026-04-14T13:19:39.958648611Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=26947180"
Apr 14 13:19:40.016157 containerd[1468]: time="2026-04-14T13:19:40.015362009Z" level=info msg="ImageCreate event name:\"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:40.171295 containerd[1468]: time="2026-04-14T13:19:40.169810059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:40.343779 containerd[1468]: time="2026-04-14T13:19:40.341338163Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"26944341\" in 15.770981042s"
Apr 14 13:19:40.343779 containerd[1468]: time="2026-04-14T13:19:40.341454990Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\""
Apr 14 13:19:40.525634 containerd[1468]: time="2026-04-14T13:19:40.518803632Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\""
Apr 14 13:19:45.944197 containerd[1468]: time="2026-04-14T13:19:45.943043122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:46.028182 containerd[1468]: time="2026-04-14T13:19:45.961155528Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=21165744"
Apr 14 13:19:46.120731 containerd[1468]: time="2026-04-14T13:19:46.120042243Z" level=info msg="ImageCreate event name:\"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:46.276316 containerd[1468]: time="2026-04-14T13:19:46.271420841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:46.287611 containerd[1468]: time="2026-04-14T13:19:46.287052128Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"22695997\" in 5.768079947s"
Apr 14 13:19:46.287611 containerd[1468]: time="2026-04-14T13:19:46.287191093Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\""
Apr 14 13:19:46.336317 containerd[1468]: time="2026-04-14T13:19:46.332493434Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\""
Apr 14 13:19:48.017316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 14 13:19:48.059435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:19:50.378439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:19:50.439986 (kubelet)[1944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 13:19:51.410309 kubelet[1944]: E0414 13:19:51.409695 1944 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 13:19:51.420908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 13:19:51.421184 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 13:19:51.436804 systemd[1]: kubelet.service: Consumed 1.574s CPU time.
Apr 14 13:19:53.076348 containerd[1468]: time="2026-04-14T13:19:53.075142304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:53.151716 containerd[1468]: time="2026-04-14T13:19:53.082771374Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=15729779"
Apr 14 13:19:53.170025 containerd[1468]: time="2026-04-14T13:19:53.169455072Z" level=info msg="ImageCreate event name:\"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:53.270947 containerd[1468]: time="2026-04-14T13:19:53.270399661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:53.272225 containerd[1468]: time="2026-04-14T13:19:53.272013390Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"17260050\" in 6.936300288s"
Apr 14 13:19:53.272225 containerd[1468]: time="2026-04-14T13:19:53.272061485Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\""
Apr 14 13:19:53.359817 containerd[1468]: time="2026-04-14T13:19:53.357612611Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\""
Apr 14 13:19:57.531337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852891081.mount: Deactivated successfully.
Apr 14 13:19:58.737993 containerd[1468]: time="2026-04-14T13:19:58.737121709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:58.744432 containerd[1468]: time="2026-04-14T13:19:58.739174294Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=25861668"
Apr 14 13:19:58.744915 containerd[1468]: time="2026-04-14T13:19:58.744808318Z" level=info msg="ImageCreate event name:\"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:58.816622 containerd[1468]: time="2026-04-14T13:19:58.815890400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:19:58.818251 containerd[1468]: time="2026-04-14T13:19:58.817184087Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"25860793\" in 5.458347031s"
Apr 14 13:19:58.818251 containerd[1468]: time="2026-04-14T13:19:58.817216577Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\""
Apr 14 13:19:58.840314 containerd[1468]: time="2026-04-14T13:19:58.838856109Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 14 13:19:59.974372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976697412.mount: Deactivated successfully.
Apr 14 13:20:01.573059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 14 13:20:01.630946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:20:02.052842 (kubelet)[2021]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 13:20:02.053249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:20:02.347017 kubelet[2021]: E0414 13:20:02.340427 2021 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 13:20:02.353785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 13:20:02.353946 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 13:20:02.590437 containerd[1468]: time="2026-04-14T13:20:02.583508260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:20:02.600504 containerd[1468]: time="2026-04-14T13:20:02.596512437Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483"
Apr 14 13:20:02.600875 containerd[1468]: time="2026-04-14T13:20:02.600756028Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:20:02.629034 containerd[1468]: time="2026-04-14T13:20:02.628532111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:20:02.643903 containerd[1468]: time="2026-04-14T13:20:02.642465541Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.803282637s"
Apr 14 13:20:02.643903 containerd[1468]: time="2026-04-14T13:20:02.642654091Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 14 13:20:02.646468 containerd[1468]: time="2026-04-14T13:20:02.646438145Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 14 13:20:03.873303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004103209.mount: Deactivated successfully.
Apr 14 13:20:03.908113 containerd[1468]: time="2026-04-14T13:20:03.907374158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:20:03.910485 containerd[1468]: time="2026-04-14T13:20:03.908804266Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 14 13:20:03.910485 containerd[1468]: time="2026-04-14T13:20:03.910249577Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:20:03.921438 containerd[1468]: time="2026-04-14T13:20:03.920843298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:20:03.921438 containerd[1468]: time="2026-04-14T13:20:03.921163372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.274531004s"
Apr 14 13:20:03.921438 containerd[1468]: time="2026-04-14T13:20:03.921224082Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 14 13:20:03.931855 containerd[1468]: time="2026-04-14T13:20:03.929362237Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 14 13:20:06.461462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537962236.mount: Deactivated successfully.
Apr 14 13:20:12.565372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 14 13:20:12.597175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:20:15.975152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:20:16.037872 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 13:20:17.465723 kubelet[2091]: E0414 13:20:17.462839 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 13:20:17.477396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 13:20:17.477714 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 13:20:17.488715 systemd[1]: kubelet.service: Consumed 2.580s CPU time.
Apr 14 13:20:20.356803 containerd[1468]: time="2026-04-14T13:20:20.356203236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:20:20.356803 containerd[1468]: time="2026-04-14T13:20:20.356959795Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22873707"
Apr 14 13:20:20.366987 containerd[1468]: time="2026-04-14T13:20:20.366552252Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:20:20.388392 containerd[1468]: time="2026-04-14T13:20:20.384398491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:20:20.498956 containerd[1468]: time="2026-04-14T13:20:20.498416970Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 16.56618889s"
Apr 14 13:20:20.498956 containerd[1468]: time="2026-04-14T13:20:20.498736300Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 14 13:20:27.507457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 14 13:20:27.597923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:20:32.058486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:20:32.291426 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 13:20:33.368273 kubelet[2146]: E0414 13:20:33.367897 2146 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 13:20:33.544051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 13:20:33.648127 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 13:20:33.663085 systemd[1]: kubelet.service: Consumed 3.107s CPU time, 1.2M memory peak, 0B memory swap peak.
Apr 14 13:20:43.522460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 14 13:20:43.770919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:20:45.779151 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 14 13:20:45.787987 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 14 13:20:45.846088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:20:46.352193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:20:47.545234 systemd[1]: Reloading requested from client PID 2166 ('systemctl') (unit session-7.scope)...
Apr 14 13:20:47.551940 systemd[1]: Reloading...
Apr 14 13:20:50.976285 zram_generator::config[2205]: No configuration found.
Apr 14 13:20:58.203162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:21:04.104241 systemd[1]: Reloading finished in 16551 ms.
Apr 14 13:21:07.046997 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:21:07.152852 systemd[1]: kubelet.service: Deactivated successfully.
Apr 14 13:21:07.237204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:21:07.249763 systemd[1]: kubelet.service: Consumed 1.415s CPU time, 31.7M memory peak, 0B memory swap peak.
Apr 14 13:21:07.559892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:21:14.733431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:21:14.840862 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 13:21:21.175699 kubelet[2256]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 14 13:21:21.326055 kubelet[2256]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 13:21:21.326055 kubelet[2256]: I0414 13:21:21.253835 2256 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 14 13:21:27.637013 kubelet[2256]: I0414 13:21:27.636326 2256 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 14 13:21:27.637013 kubelet[2256]: I0414 13:21:27.636813 2256 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 13:21:27.637013 kubelet[2256]: I0414 13:21:27.637210 2256 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 14 13:21:27.637013 kubelet[2256]: I0414 13:21:27.637237 2256 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 13:21:27.754166 kubelet[2256]: I0414 13:21:27.750089 2256 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 14 13:21:28.316361 kubelet[2256]: E0414 13:21:28.315783 2256 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 13:21:28.325383 kubelet[2256]: I0414 13:21:28.324367 2256 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 13:21:29.018464 kubelet[2256]: E0414 13:21:28.961200 2256 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 13:21:29.039922 kubelet[2256]: I0414 13:21:29.028096 2256 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 14 13:21:29.735821 kubelet[2256]: I0414 13:21:29.729822 2256 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 14 13:21:29.929721 kubelet[2256]: I0414 13:21:29.928094 2256 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 13:21:29.929721 kubelet[2256]: I0414 13:21:29.929390 2256 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 14 13:21:30.026543 kubelet[2256]: I0414 13:21:29.930323 2256 topology_manager.go:138] "Creating topology manager with none policy"
Apr 14 13:21:30.026543 kubelet[2256]: I0414 13:21:29.930337 2256 container_manager_linux.go:306] "Creating device plugin manager"
Apr 14 13:21:30.026543 kubelet[2256]: I0414 13:21:29.979975 2256 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 14 13:21:30.263768 kubelet[2256]: I0414 13:21:30.261065 2256 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 13:21:30.367850 kubelet[2256]: I0414 13:21:30.357979 2256 kubelet.go:475] "Attempting to sync node with API server"
Apr 14 13:21:30.367850 kubelet[2256]: I0414 13:21:30.359956 2256 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 13:21:30.367850 kubelet[2256]: I0414 13:21:30.362553 2256 kubelet.go:387] "Adding apiserver pod source"
Apr 14 13:21:30.367850 kubelet[2256]: I0414 13:21:30.364288 2256 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 13:21:30.368544 kubelet[2256]: E0414 13:21:30.368439 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 13:21:30.368544 kubelet[2256]: E0414 13:21:30.368429 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 13:21:30.376051 kubelet[2256]: I0414 13:21:30.374162 2256 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 13:21:30.437097 kubelet[2256]: I0414 13:21:30.431338 2256 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 13:21:30.452119 kubelet[2256]: I0414 13:21:30.438314 2256 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 14 13:21:30.452119 kubelet[2256]: W0414 13:21:30.451508 2256 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 14 13:21:30.772210 kubelet[2256]: E0414 13:21:30.766243 2256 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 13:21:31.168172 kubelet[2256]: I0414 13:21:31.133822 2256 server.go:1262] "Started kubelet"
Apr 14 13:21:31.168172 kubelet[2256]: I0414 13:21:31.149949 2256 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 13:21:31.309850 kubelet[2256]: I0414 13:21:31.174306 2256 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 13:21:31.309850 kubelet[2256]: I0414 13:21:31.256782 2256 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 14 13:21:31.312807 kubelet[2256]: I0414 13:21:31.310291 2256 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 13:21:31.312807 kubelet[2256]: I0414 13:21:31.310713 2256 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 14 13:21:31.315115 kubelet[2256]: I0414 13:21:31.314917 2256 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 13:21:31.318277 kubelet[2256]: I0414 13:21:31.316059 2256 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 14 13:21:31.318277 kubelet[2256]: E0414 13:21:31.305751 2256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63bd57af7d04f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,LastTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 13:21:31.318277 kubelet[2256]: E0414 13:21:31.316795 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:31.318277 kubelet[2256]: I0414 13:21:31.317264 2256 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 14 13:21:31.318277 kubelet[2256]: I0414 13:21:31.317366 2256 reconciler.go:29] "Reconciler: start to sync state"
Apr 14 13:21:31.318277 kubelet[2256]: E0414 13:21:31.317787 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 13:21:31.318277 kubelet[2256]: E0414 13:21:31.318050 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="200ms"
Apr 14 13:21:31.565194 kubelet[2256]: E0414 13:21:31.557470 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:31.568212 kubelet[2256]: I0414 13:21:31.568153 2256 server.go:310] "Adding debug handlers to kubelet server"
Apr 14 13:21:31.568468 kubelet[2256]: E0414 13:21:31.568309 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="400ms"
Apr 14 13:21:31.577221 kubelet[2256]: I0414 13:21:31.568965 2256 factory.go:223] Registration of the systemd container factory successfully
Apr 14 13:21:31.649732 kubelet[2256]: I0414 13:21:31.616265 2256 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 13:21:31.678289 kubelet[2256]: E0414 13:21:31.669258 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:31.813188 kubelet[2256]: E0414 13:21:31.803986 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:31.813188 kubelet[2256]: E0414 13:21:31.813322 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 13:21:31.961569 kubelet[2256]: E0414 13:21:31.944250 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:31.984386 kubelet[2256]: E0414 13:21:31.983104 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 13:21:31.984386 kubelet[2256]: E0414 13:21:32.025222 2256 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 13:21:31.984386 kubelet[2256]: I0414 13:21:32.028323 2256 factory.go:223] Registration of the containerd container factory successfully
Apr 14 13:21:32.081161 kubelet[2256]: E0414 13:21:32.079140 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:32.129250 kubelet[2256]: E0414 13:21:32.127307 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms"
Apr 14 13:21:32.268933 kubelet[2256]: E0414 13:21:32.268766 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:32.482811 kubelet[2256]: E0414 13:21:32.460914 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:32.631459 kubelet[2256]: E0414 13:21:32.620306 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:32.777223 kubelet[2256]: E0414 13:21:32.770249 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:32.879147 kubelet[2256]: E0414 13:21:32.877811 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:32.879147 kubelet[2256]: E0414 13:21:32.877831 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 13:21:32.980516 kubelet[2256]: E0414 13:21:32.979120 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:32.980860 kubelet[2256]: E0414 13:21:32.980652 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="1.6s"
Apr 14 13:21:33.121390 kubelet[2256]: E0414 13:21:33.120851 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:33.239825 kubelet[2256]: E0414 13:21:33.239313 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:33.351212 kubelet[2256]: E0414 13:21:33.348811 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:33.447039 kubelet[2256]: I0414 13:21:33.430122 2256 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 14 13:21:33.447039 kubelet[2256]: I0414 13:21:33.431470 2256 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 14 13:21:33.461357 kubelet[2256]: I0414 13:21:33.452128 2256 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 14 13:21:33.461357 kubelet[2256]: E0414 13:21:33.453118 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:33.461357 kubelet[2256]: I0414 13:21:33.453446 2256 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 13:21:33.541179 kubelet[2256]: I0414 13:21:33.515426 2256 policy_none.go:49] "None policy: Start"
Apr 14 13:21:33.541179 kubelet[2256]: I0414 13:21:33.534928 2256 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 14 13:21:33.541179 kubelet[2256]: I0414 13:21:33.536474 2256 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 14 13:21:33.649794 kubelet[2256]: I0414 13:21:33.547323 2256 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 14 13:21:33.649794 kubelet[2256]: I0414 13:21:33.575882 2256 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 14 13:21:33.649794 kubelet[2256]: E0414 13:21:33.582210 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:33.649794 kubelet[2256]: I0414 13:21:33.583198 2256 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 14 13:21:33.649794 kubelet[2256]: I0414 13:21:33.637463 2256 policy_none.go:47] "Start"
Apr 14 13:21:33.720371 kubelet[2256]: E0414 13:21:33.651365 2256 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 13:21:33.756294 kubelet[2256]: E0414 13:21:33.740336 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:33.905953 kubelet[2256]: E0414 13:21:33.831263 2256 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 13:21:33.956117 kubelet[2256]: E0414 13:21:33.952456 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 13:21:34.081480 kubelet[2256]: E0414 13:21:33.964133 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 13:21:34.081480 kubelet[2256]: E0414 13:21:34.038722 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:34.081480 kubelet[2256]: E0414 13:21:34.072059 2256 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 13:21:34.227970 kubelet[2256]: E0414 13:21:34.204786 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:34.386037 kubelet[2256]: E0414 13:21:34.377371 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:34.542131 kubelet[2256]: E0414 13:21:34.396420 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 13:21:34.542131 kubelet[2256]: E0414 13:21:34.541788 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:34.553374 kubelet[2256]: E0414 13:21:34.552974 2256 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 13:21:34.651304 kubelet[2256]: E0414 13:21:34.644174 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:34.948305 kubelet[2256]: E0414 13:21:34.943435 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:35.010272 kubelet[2256]: E0414 13:21:34.870849 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="3.2s"
Apr 14 13:21:35.088822 kubelet[2256]: E0414 13:21:35.078433 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:35.338254 kubelet[2256]: E0414 13:21:35.313941 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:35.434300 kubelet[2256]: E0414 13:21:35.427418 2256 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 13:21:35.643426 kubelet[2256]: E0414 13:21:35.570563 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:35.781311 kubelet[2256]: E0414 13:21:35.779913 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:35.829419 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 14 13:21:36.361127 kubelet[2256]: E0414 13:21:36.059399 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:36.460542 kubelet[2256]: E0414 13:21:36.189939 2256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63bd57af7d04f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,LastTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 13:21:36.528951 kubelet[2256]: E0414 13:21:36.485074 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:36.528951 kubelet[2256]: E0414 13:21:36.500246 2256 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 13:21:36.528951 kubelet[2256]: E0414 13:21:36.519462 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 13:21:36.656215 kubelet[2256]: E0414 13:21:36.593232 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 13:21:36.784085 kubelet[2256]: E0414 13:21:36.781962 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:36.890813 kubelet[2256]: E0414 13:21:36.888478 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:37.043534 kubelet[2256]: E0414 13:21:37.041817 2256 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 13:21:37.043534 kubelet[2256]: E0414 13:21:37.041802 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:37.248419 kubelet[2256]: E0414 13:21:37.232344 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:37.441399 kubelet[2256]: E0414 13:21:37.366801 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:37.534396 kubelet[2256]: E0414 13:21:37.532895 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:37.728749 kubelet[2256]: E0414 13:21:37.639207 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:37.831292 kubelet[2256]: E0414 13:21:37.783476 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:37.917937 kubelet[2256]: E0414 13:21:37.917610 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 13:21:37.956458 kubelet[2256]: E0414 13:21:37.954400 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:38.074485 kubelet[2256]: E0414 13:21:38.069972 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:38.183177 kubelet[2256]: E0414 13:21:38.181558 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:21:38.206703 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 14 13:21:38.276484 kubelet[2256]: E0414 13:21:38.273262 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="6.4s" Apr 14 13:21:38.335994 kubelet[2256]: E0414 13:21:38.290253 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:21:38.431353 kubelet[2256]: E0414 13:21:38.430833 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:21:38.566364 kubelet[2256]: E0414 13:21:38.560360 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:21:38.678528 kubelet[2256]: E0414 13:21:38.677952 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:21:38.679053 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 14 13:21:38.825249 kubelet[2256]: E0414 13:21:38.817525 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:21:38.930873 kubelet[2256]: E0414 13:21:38.929696 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:21:39.043106 kubelet[2256]: E0414 13:21:39.040634 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:21:39.108266 kubelet[2256]: E0414 13:21:39.049513 2256 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 13:21:39.121461 kubelet[2256]: I0414 13:21:39.114363 2256 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 13:21:39.121461 kubelet[2256]: I0414 13:21:39.116793 2256 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 13:21:39.122370 kubelet[2256]: I0414 13:21:39.122269 2256 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 13:21:39.437642 kubelet[2256]: E0414 13:21:39.437272 2256 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 14 13:21:39.451261 kubelet[2256]: E0414 13:21:39.443058 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:21:39.451261 kubelet[2256]: I0414 13:21:39.437361 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:21:39.544184 kubelet[2256]: E0414 13:21:39.488275 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:21:39.566671 kubelet[2256]: E0414 13:21:39.564774 2256 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 14 13:21:39.806548 kubelet[2256]: E0414 13:21:39.806155 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 13:21:39.808114 kubelet[2256]: I0414 13:21:39.807693 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:21:39.815398 kubelet[2256]: E0414 13:21:39.808234 2256 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 14 13:21:40.278775 kubelet[2256]: I0414 13:21:40.278451 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:21:40.299810 kubelet[2256]: 
E0414 13:21:40.298533 2256 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 14 13:21:40.381941 kubelet[2256]: I0414 13:21:40.376361 2256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef4c7b0b14aacb703d6788ed41a925d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3ef4c7b0b14aacb703d6788ed41a925d\") " pod="kube-system/kube-scheduler-localhost" Apr 14 13:21:40.383193 kubelet[2256]: I0414 13:21:40.382545 2256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/385147077aa7ba6cb9a6805ee8a5b732-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"385147077aa7ba6cb9a6805ee8a5b732\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:21:40.383193 kubelet[2256]: I0414 13:21:40.382656 2256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/385147077aa7ba6cb9a6805ee8a5b732-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"385147077aa7ba6cb9a6805ee8a5b732\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:21:40.383193 kubelet[2256]: I0414 13:21:40.382806 2256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/385147077aa7ba6cb9a6805ee8a5b732-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"385147077aa7ba6cb9a6805ee8a5b732\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:21:40.634535 kubelet[2256]: I0414 13:21:40.634229 2256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:21:40.634535 kubelet[2256]: I0414 13:21:40.634407 2256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:21:40.634535 kubelet[2256]: I0414 13:21:40.634433 2256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:21:40.634535 kubelet[2256]: I0414 13:21:40.634556 2256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:21:40.634535 kubelet[2256]: I0414 13:21:40.634615 2256 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:21:41.305192 kubelet[2256]: E0414 13:21:41.302260 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 13:21:41.311915 systemd[1]: Created slice kubepods-burstable-pod3ef4c7b0b14aacb703d6788ed41a925d.slice - libcontainer container kubepods-burstable-pod3ef4c7b0b14aacb703d6788ed41a925d.slice. Apr 14 13:21:41.327247 kubelet[2256]: I0414 13:21:41.327163 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:21:41.327784 kubelet[2256]: E0414 13:21:41.327718 2256 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 14 13:21:41.887083 kubelet[2256]: E0414 13:21:41.883123 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:21:42.044030 kubelet[2256]: E0414 13:21:42.043427 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:21:42.523291 containerd[1468]: time="2026-04-14T13:21:42.522739312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3ef4c7b0b14aacb703d6788ed41a925d,Namespace:kube-system,Attempt:0,}" Apr 14 13:21:42.878628 systemd[1]: Created slice kubepods-burstable-pod385147077aa7ba6cb9a6805ee8a5b732.slice - libcontainer container kubepods-burstable-pod385147077aa7ba6cb9a6805ee8a5b732.slice. 
Apr 14 13:21:43.045819 kubelet[2256]: I0414 13:21:43.045659 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:21:43.316958 kubelet[2256]: E0414 13:21:43.313649 2256 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 14 13:21:43.342327 kubelet[2256]: E0414 13:21:43.342169 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:21:43.349609 systemd[1]: Created slice kubepods-burstable-poddc6a32a2019cd173b38de969cf403b25.slice - libcontainer container kubepods-burstable-poddc6a32a2019cd173b38de969cf403b25.slice. Apr 14 13:21:43.357490 kubelet[2256]: E0414 13:21:43.357298 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:21:43.474983 containerd[1468]: time="2026-04-14T13:21:43.472462046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:385147077aa7ba6cb9a6805ee8a5b732,Namespace:kube-system,Attempt:0,}" Apr 14 13:21:43.532976 kubelet[2256]: E0414 13:21:43.527298 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:21:43.710181 kubelet[2256]: E0414 13:21:43.708988 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:21:43.761418 kubelet[2256]: E0414 13:21:43.741837 2256 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:21:43.831867 containerd[1468]: time="2026-04-14T13:21:43.830435447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dc6a32a2019cd173b38de969cf403b25,Namespace:kube-system,Attempt:0,}" Apr 14 13:21:44.678808 kubelet[2256]: E0414 13:21:44.663483 2256 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 13:21:44.840408 kubelet[2256]: E0414 13:21:44.793265 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="7s" Apr 14 13:21:46.512012 kubelet[2256]: E0414 13:21:46.507467 2256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63bd57af7d04f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,LastTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 
13:21:47.071039 kubelet[2256]: E0414 13:21:47.053920 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 13:21:47.094337 kubelet[2256]: I0414 13:21:47.093748 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:21:47.371777 kubelet[2256]: E0414 13:21:47.350211 2256 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 14 13:21:49.128217 kubelet[2256]: E0414 13:21:49.122806 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 13:21:49.533945 kubelet[2256]: E0414 13:21:49.485855 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:21:50.747629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740849397.mount: Deactivated successfully. 
Apr 14 13:21:50.770444 containerd[1468]: time="2026-04-14T13:21:50.766018097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:21:50.928215 containerd[1468]: time="2026-04-14T13:21:50.846038436Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 14 13:21:50.928215 containerd[1468]: time="2026-04-14T13:21:50.862117872Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:21:50.944701 kubelet[2256]: E0414 13:21:50.833170 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 13:21:51.181162 containerd[1468]: time="2026-04-14T13:21:51.152900353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 13:21:51.181162 containerd[1468]: time="2026-04-14T13:21:51.165176743Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:21:51.331079 containerd[1468]: time="2026-04-14T13:21:51.319110675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 13:21:51.566010 kubelet[2256]: E0414 13:21:51.559328 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:21:51.756856 containerd[1468]: time="2026-04-14T13:21:51.756306912Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:21:52.137385 kubelet[2256]: E0414 13:21:52.125456 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="7s" Apr 14 13:21:54.587395 kubelet[2256]: I0414 13:21:54.568280 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:21:54.787415 containerd[1468]: time="2026-04-14T13:21:54.587425670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:21:55.087969 kubelet[2256]: E0414 13:21:55.060052 2256 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 14 13:21:56.418268 containerd[1468]: time="2026-04-14T13:21:56.411920920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 13.839798503s" Apr 14 13:21:56.983320 kubelet[2256]: E0414 13:21:56.932941 2256 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63bd57af7d04f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,LastTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:21:58.397405 containerd[1468]: time="2026-04-14T13:21:58.393223649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 14.56258943s" Apr 14 13:21:58.912976 containerd[1468]: time="2026-04-14T13:21:58.905320119Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 15.405722521s" Apr 14 13:21:59.560157 kubelet[2256]: E0414 13:21:59.557158 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:22:00.178507 kubelet[2256]: E0414 13:22:00.157964 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="7s" Apr 14 13:22:01.882428 kubelet[2256]: E0414 13:22:01.878946 2256 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 13:22:02.267128 kubelet[2256]: E0414 13:22:02.145425 2256 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 13:22:04.966073 kubelet[2256]: I0414 13:22:04.934286 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:22:06.674157 kubelet[2256]: E0414 13:22:06.673952 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 13:22:06.674157 kubelet[2256]: E0414 13:22:06.674197 2256 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 14 13:22:07.078340 kubelet[2256]: E0414 13:22:06.881460 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 13:22:07.182149 kubelet[2256]: E0414 13:22:07.082190 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 13:22:07.583534 containerd[1468]: time="2026-04-14T13:22:07.432190262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:22:07.583534 containerd[1468]: time="2026-04-14T13:22:07.504704291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:22:07.583534 containerd[1468]: time="2026-04-14T13:22:07.567154217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:22:07.959079 containerd[1468]: time="2026-04-14T13:22:07.635058011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:22:08.105224 kubelet[2256]: E0414 13:22:07.567441 2256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63bd57af7d04f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,LastTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:22:08.518433 kubelet[2256]: E0414 13:22:07.959218 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="7s" Apr 14 13:22:09.622472 kubelet[2256]: E0414 13:22:09.590542 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:22:10.071527 containerd[1468]: time="2026-04-14T13:22:09.970537463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:22:10.299268 containerd[1468]: time="2026-04-14T13:22:10.130382986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:22:10.299268 containerd[1468]: time="2026-04-14T13:22:10.228419555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:22:10.321087 containerd[1468]: time="2026-04-14T13:22:10.299461848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:22:11.028906 containerd[1468]: time="2026-04-14T13:22:11.015468381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:22:11.028906 containerd[1468]: time="2026-04-14T13:22:11.023916785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:22:11.028906 containerd[1468]: time="2026-04-14T13:22:11.023940261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:22:11.048332 containerd[1468]: time="2026-04-14T13:22:11.041154588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:22:11.530994 systemd[1]: Started cri-containerd-c8792d688dae0a5b898d7dc20a924450873c3baa5fd1d62047ff9c93b0df50ad.scope - libcontainer container c8792d688dae0a5b898d7dc20a924450873c3baa5fd1d62047ff9c93b0df50ad. Apr 14 13:22:13.643519 systemd[1]: Started cri-containerd-54d8fefb627e479c232a46693b13d7ebf84ba0e52d78ce91f605d11c82b235af.scope - libcontainer container 54d8fefb627e479c232a46693b13d7ebf84ba0e52d78ce91f605d11c82b235af. Apr 14 13:22:14.038819 systemd[1]: Started cri-containerd-fc11938dac96ce5e8729823fd16f92c1d4e686f6e42f514f202da7cb60a54e1a.scope - libcontainer container fc11938dac96ce5e8729823fd16f92c1d4e686f6e42f514f202da7cb60a54e1a. 
Apr 14 13:22:14.992516 containerd[1468]: time="2026-04-14T13:22:14.988005440Z" level=error msg="get state for c8792d688dae0a5b898d7dc20a924450873c3baa5fd1d62047ff9c93b0df50ad" error="context deadline exceeded: unknown" Apr 14 13:22:15.082038 containerd[1468]: time="2026-04-14T13:22:15.037474008Z" level=warning msg="unknown status" status=0 Apr 14 13:22:15.101063 kubelet[2256]: I0414 13:22:15.081549 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:22:15.179884 kubelet[2256]: E0414 13:22:15.172799 2256 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 14 13:22:15.805503 kubelet[2256]: E0414 13:22:15.805000 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="7s" Apr 14 13:22:15.863048 containerd[1468]: time="2026-04-14T13:22:15.796236900Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 14 13:22:17.158899 kubelet[2256]: E0414 13:22:17.144667 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:22:17.727162 containerd[1468]: time="2026-04-14T13:22:17.721378099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:385147077aa7ba6cb9a6805ee8a5b732,Namespace:kube-system,Attempt:0,} returns sandbox id \"54d8fefb627e479c232a46693b13d7ebf84ba0e52d78ce91f605d11c82b235af\"" Apr 14 13:22:17.901473 containerd[1468]: time="2026-04-14T13:22:17.901061861Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3ef4c7b0b14aacb703d6788ed41a925d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8792d688dae0a5b898d7dc20a924450873c3baa5fd1d62047ff9c93b0df50ad\"" Apr 14 13:22:17.987342 kubelet[2256]: E0414 13:22:17.981975 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:22:18.153913 kubelet[2256]: E0414 13:22:18.153372 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:22:18.214013 containerd[1468]: time="2026-04-14T13:22:18.206178199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dc6a32a2019cd173b38de969cf403b25,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc11938dac96ce5e8729823fd16f92c1d4e686f6e42f514f202da7cb60a54e1a\"" Apr 14 13:22:18.329522 kubelet[2256]: E0414 13:22:18.242545 2256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63bd57af7d04f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,LastTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:22:18.406260 kubelet[2256]: E0414 13:22:18.393084 2256 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:22:18.974135 containerd[1468]: time="2026-04-14T13:22:18.973420704Z" level=info msg="CreateContainer within sandbox \"c8792d688dae0a5b898d7dc20a924450873c3baa5fd1d62047ff9c93b0df50ad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 14 13:22:19.191984 containerd[1468]: time="2026-04-14T13:22:18.988257099Z" level=info msg="CreateContainer within sandbox \"54d8fefb627e479c232a46693b13d7ebf84ba0e52d78ce91f605d11c82b235af\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 14 13:22:19.274885 containerd[1468]: time="2026-04-14T13:22:19.274038320Z" level=info msg="CreateContainer within sandbox \"fc11938dac96ce5e8729823fd16f92c1d4e686f6e42f514f202da7cb60a54e1a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 14 13:22:19.679218 kubelet[2256]: E0414 13:22:19.631412 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:22:20.751898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792175177.mount: Deactivated successfully. Apr 14 13:22:22.702570 containerd[1468]: time="2026-04-14T13:22:22.691062668Z" level=info msg="CreateContainer within sandbox \"54d8fefb627e479c232a46693b13d7ebf84ba0e52d78ce91f605d11c82b235af\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d1327201afb538b0eee2b5b923678d22f235990125a1e1277714810b7b31f060\"" Apr 14 13:22:23.130508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531695243.mount: Deactivated successfully. 
Apr 14 13:22:23.380393 kubelet[2256]: E0414 13:22:23.370181 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="7s" Apr 14 13:22:23.664756 containerd[1468]: time="2026-04-14T13:22:23.595451445Z" level=info msg="CreateContainer within sandbox \"c8792d688dae0a5b898d7dc20a924450873c3baa5fd1d62047ff9c93b0df50ad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9eb30e823423e6227707a2ff0ce50949a8c3d2c3edef696c0b2000c3c1f7cb3d\"" Apr 14 13:22:23.722326 containerd[1468]: time="2026-04-14T13:22:23.718331285Z" level=info msg="CreateContainer within sandbox \"fc11938dac96ce5e8729823fd16f92c1d4e686f6e42f514f202da7cb60a54e1a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3\"" Apr 14 13:22:23.839521 kubelet[2256]: I0414 13:22:23.825549 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:22:23.866047 containerd[1468]: time="2026-04-14T13:22:23.865944683Z" level=info msg="StartContainer for \"9eb30e823423e6227707a2ff0ce50949a8c3d2c3edef696c0b2000c3c1f7cb3d\"" Apr 14 13:22:23.959072 containerd[1468]: time="2026-04-14T13:22:23.868532765Z" level=info msg="StartContainer for \"d1327201afb538b0eee2b5b923678d22f235990125a1e1277714810b7b31f060\"" Apr 14 13:22:23.959072 containerd[1468]: time="2026-04-14T13:22:23.872231279Z" level=info msg="StartContainer for \"0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3\"" Apr 14 13:22:24.019439 kubelet[2256]: E0414 13:22:23.932399 2256 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Apr 14 13:22:28.452521 kubelet[2256]: E0414 
13:22:28.427489 2256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63bd57af7d04f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,LastTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:22:29.696285 kubelet[2256]: E0414 13:22:29.691374 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:22:30.163254 systemd[1]: Started cri-containerd-0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3.scope - libcontainer container 0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3. Apr 14 13:22:30.625541 systemd[1]: Started cri-containerd-9eb30e823423e6227707a2ff0ce50949a8c3d2c3edef696c0b2000c3c1f7cb3d.scope - libcontainer container 9eb30e823423e6227707a2ff0ce50949a8c3d2c3edef696c0b2000c3c1f7cb3d. Apr 14 13:22:31.033486 systemd[1]: Started cri-containerd-d1327201afb538b0eee2b5b923678d22f235990125a1e1277714810b7b31f060.scope - libcontainer container d1327201afb538b0eee2b5b923678d22f235990125a1e1277714810b7b31f060. 
Apr 14 13:22:35.005102 containerd[1468]: time="2026-04-14T13:22:35.000548595Z" level=error msg="get state for 0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3" error="context deadline exceeded: unknown" Apr 14 13:22:35.412520 containerd[1468]: time="2026-04-14T13:22:35.115030565Z" level=warning msg="unknown status" status=0 Apr 14 13:22:35.442482 containerd[1468]: time="2026-04-14T13:22:35.438348528Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 14 13:22:36.290056 kubelet[2256]: E0414 13:22:35.677935 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="7s" Apr 14 13:22:38.602527 kubelet[2256]: E0414 13:22:38.601040 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 13:22:41.673097 containerd[1468]: time="2026-04-14T13:22:41.658435272Z" level=info msg="StartContainer for \"0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3\" returns successfully" Apr 14 13:22:42.098536 containerd[1468]: time="2026-04-14T13:22:42.065254258Z" level=info msg="StartContainer for \"d1327201afb538b0eee2b5b923678d22f235990125a1e1277714810b7b31f060\" returns successfully" Apr 14 13:22:42.975004 kubelet[2256]: E0414 13:22:42.141567 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:22:43.469500 containerd[1468]: time="2026-04-14T13:22:43.272371034Z" level=info msg="StartContainer for \"9eb30e823423e6227707a2ff0ce50949a8c3d2c3edef696c0b2000c3c1f7cb3d\" returns 
successfully" Apr 14 13:22:44.367597 kubelet[2256]: E0414 13:22:43.714623 2256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63bd57af7d04f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,LastTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:22:44.883057 kubelet[2256]: E0414 13:22:44.437132 2256 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 13:22:46.668168 kubelet[2256]: I0414 13:22:46.655450 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:22:52.755607 kubelet[2256]: E0414 13:22:52.742449 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:22:53.075034 kubelet[2256]: E0414 13:22:53.033623 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:22:53.648105 kubelet[2256]: E0414 13:22:53.612268 2256 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 13:22:54.231471 kubelet[2256]: E0414 13:22:54.227128 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:22:55.158499 kubelet[2256]: E0414 13:22:55.157380 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:22:55.290545 kubelet[2256]: E0414 13:22:55.288737 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:22:55.486273 kubelet[2256]: E0414 13:22:55.441205 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:22:55.773693 kubelet[2256]: E0414 13:22:55.768373 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:22:58.214983 kubelet[2256]: E0414 13:22:58.156356 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:22:58.565296 kubelet[2256]: E0414 13:22:58.549540 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:22:59.058154 kubelet[2256]: E0414 13:22:59.051329 2256 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 14 
13:23:00.332373 kubelet[2256]: E0414 13:23:00.329405 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:23:00.332373 kubelet[2256]: E0414 13:23:00.329522 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:23:00.625504 kubelet[2256]: E0414 13:23:00.400313 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:23:00.625504 kubelet[2256]: E0414 13:23:00.461488 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:23:02.035098 kubelet[2256]: E0414 13:23:01.959797 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:23:02.066990 kubelet[2256]: E0414 13:23:02.055695 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:23:02.066990 kubelet[2256]: E0414 13:23:02.056553 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:23:02.122519 kubelet[2256]: E0414 13:23:02.118663 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:23:03.351307 kubelet[2256]: E0414 13:23:03.350918 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Apr 14 13:23:03.436607 kubelet[2256]: E0414 13:23:03.434772 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:23:03.689242 kubelet[2256]: E0414 13:23:03.614300 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:23:03.737191 kubelet[2256]: E0414 13:23:03.714296 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:23:03.902903 kubelet[2256]: E0414 13:23:03.897366 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:23:03.908891 kubelet[2256]: E0414 13:23:03.904076 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:23:04.282474 kubelet[2256]: E0414 13:23:04.257033 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:23:04.399865 kubelet[2256]: E0414 13:23:04.395318 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:23:04.409497 kubelet[2256]: E0414 13:23:04.407292 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:23:04.641249 kubelet[2256]: E0414 13:23:04.637665 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 13:23:04.883453 kubelet[2256]: E0414 13:23:04.862049 2256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a63bd57af7d04f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,LastTimestamp:2026-04-14 13:21:31.121373263 +0000 UTC m=+16.090485465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:23:04.973129 kubelet[2256]: E0414 13:23:04.969409 2256 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 13:23:06.373485 kubelet[2256]: I0414 13:23:06.362288 2256 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:23:10.737039 kubelet[2256]: E0414 13:23:10.735853 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 14 13:23:14.436495 kubelet[2256]: E0414 13:23:14.392934 2256 eviction_manager.go:292] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:23:15.979214 kubelet[2256]: E0414 13:23:15.972776 2256 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:23:16.136439 kubelet[2256]: E0414 13:23:16.136012 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:23:16.505660 kubelet[2256]: I0414 13:23:16.503838 2256 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 13:23:16.554314 kubelet[2256]: E0414 13:23:16.530090 2256 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 14 13:23:18.226940 kubelet[2256]: E0414 13:23:18.226521 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:18.378936 kubelet[2256]: E0414 13:23:18.372084 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:18.561412 kubelet[2256]: E0414 13:23:18.530140 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:18.652275 kubelet[2256]: E0414 13:23:18.650130 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:18.785184 kubelet[2256]: E0414 13:23:18.780733 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:18.975365 kubelet[2256]: E0414 13:23:18.946243 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:19.194733 kubelet[2256]: E0414 13:23:19.193750 2256 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:19.373911 kubelet[2256]: E0414 13:23:19.373132 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:19.572560 kubelet[2256]: E0414 13:23:19.550840 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:19.741653 kubelet[2256]: E0414 13:23:19.739234 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:19.853107 kubelet[2256]: E0414 13:23:19.852143 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:20.088051 kubelet[2256]: E0414 13:23:20.034056 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:20.167617 kubelet[2256]: E0414 13:23:20.165074 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:20.402656 kubelet[2256]: E0414 13:23:20.364546 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:20.679370 kubelet[2256]: E0414 13:23:20.589407 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:20.840556 kubelet[2256]: E0414 13:23:20.834428 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:21.062683 kubelet[2256]: E0414 13:23:21.044441 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:21.269631 kubelet[2256]: E0414 13:23:21.187481 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" 
not found" Apr 14 13:23:21.473396 kubelet[2256]: E0414 13:23:21.453895 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:21.850275 kubelet[2256]: E0414 13:23:21.644145 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:22.006568 kubelet[2256]: E0414 13:23:22.000329 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:22.245194 kubelet[2256]: E0414 13:23:22.242570 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:22.468129 kubelet[2256]: E0414 13:23:22.432214 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:22.729586 kubelet[2256]: E0414 13:23:22.633731 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:22.767021 kubelet[2256]: E0414 13:23:22.760503 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:23.056725 kubelet[2256]: E0414 13:23:22.980715 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:23.111916 kubelet[2256]: E0414 13:23:23.107633 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:23.241018 kubelet[2256]: E0414 13:23:23.236968 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:23.378458 kubelet[2256]: E0414 13:23:23.360565 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:23.542229 kubelet[2256]: E0414 13:23:23.532143 2256 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:23.635690 kubelet[2256]: E0414 13:23:23.635350 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:23.761782 kubelet[2256]: E0414 13:23:23.750924 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:23.884759 kubelet[2256]: E0414 13:23:23.882561 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:24.039641 kubelet[2256]: E0414 13:23:24.028543 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:24.178468 kubelet[2256]: E0414 13:23:24.177663 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:24.451441 kubelet[2256]: E0414 13:23:24.439566 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:24.561177 kubelet[2256]: E0414 13:23:24.558419 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:24.772765 kubelet[2256]: E0414 13:23:24.563917 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:23:24.774567 kubelet[2256]: E0414 13:23:24.690254 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:24.947695 kubelet[2256]: E0414 13:23:24.938082 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:25.075212 kubelet[2256]: E0414 13:23:25.074196 2256 kubelet_node_status.go:404] "Error getting the current node from 
lister" err="node \"localhost\" not found" Apr 14 13:23:25.189098 kubelet[2256]: E0414 13:23:25.188647 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:25.367477 kubelet[2256]: E0414 13:23:25.348529 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:25.469330 kubelet[2256]: E0414 13:23:25.463559 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:25.599475 kubelet[2256]: E0414 13:23:25.597181 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:25.751115 kubelet[2256]: E0414 13:23:25.733990 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:25.851252 kubelet[2256]: E0414 13:23:25.844287 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:25.966130 kubelet[2256]: E0414 13:23:25.960993 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:26.080705 kubelet[2256]: E0414 13:23:26.071397 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:26.177292 kubelet[2256]: E0414 13:23:26.174110 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:26.280408 kubelet[2256]: E0414 13:23:26.279276 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:26.432636 kubelet[2256]: E0414 13:23:26.388698 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:26.568462 kubelet[2256]: E0414 
13:23:26.560039 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:26.713460 kubelet[2256]: E0414 13:23:26.712237 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:26.831265 kubelet[2256]: E0414 13:23:26.828269 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:26.984982 kubelet[2256]: E0414 13:23:26.963532 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:27.174142 kubelet[2256]: E0414 13:23:27.156806 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:27.328714 kubelet[2256]: E0414 13:23:27.279449 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:27.444022 kubelet[2256]: E0414 13:23:27.443641 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:27.569479 kubelet[2256]: E0414 13:23:27.564250 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:27.841404 kubelet[2256]: E0414 13:23:27.840828 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:28.003194 kubelet[2256]: E0414 13:23:28.002345 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:28.273998 kubelet[2256]: E0414 13:23:28.246093 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:28.497602 kubelet[2256]: E0414 13:23:28.495895 2256 kubelet_node_status.go:486] "Error updating node status, will retry" 
err="error getting node \"localhost\": node \"localhost\" not found" Apr 14 13:23:29.408868 kubelet[2256]: E0414 13:23:29.390528 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:29.534809 kubelet[2256]: E0414 13:23:29.532939 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:29.646564 kubelet[2256]: E0414 13:23:29.644316 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:29.749368 kubelet[2256]: E0414 13:23:29.747180 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:29.889170 kubelet[2256]: E0414 13:23:29.888326 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:30.031779 kubelet[2256]: E0414 13:23:30.022514 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:30.128946 kubelet[2256]: E0414 13:23:30.128152 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:30.293491 kubelet[2256]: E0414 13:23:30.271354 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:30.394174 kubelet[2256]: E0414 13:23:30.392071 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:30.503925 kubelet[2256]: E0414 13:23:30.503241 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:30.636198 kubelet[2256]: E0414 13:23:30.617224 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:23:30.752399 
kubelet[2256]: E0414 13:23:30.750726 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:23:30.946678 kubelet[2256]: E0414 13:23:30.874434 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:23:31.081544 kubelet[2256]: E0414 13:23:31.073911 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:23:31.259765 kubelet[2256]: E0414 13:23:31.194067 2256 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:23:31.643332 kubelet[2256]: E0414 13:23:31.630307 2256 kubelet_node_status.go:398] "Node not becoming ready in time after startup"
Apr 14 13:23:33.326054 kubelet[2256]: E0414 13:23:33.325529 2256 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:34.661605 kubelet[2256]: E0414 13:23:34.659417 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 13:23:38.939053 kubelet[2256]: E0414 13:23:38.938372 2256 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:39.776242 kubelet[2256]: E0414 13:23:39.773346 2256 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 14 13:23:44.062836 kubelet[2256]: E0414 13:23:44.062372 2256 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:44.789384 kubelet[2256]: E0414 13:23:44.780758 2256 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 13:23:48.351297 systemd[1]: cri-containerd-0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3.scope: Deactivated successfully.
Apr 14 13:23:48.362133 systemd[1]: cri-containerd-0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3.scope: Consumed 9.049s CPU time.
Apr 14 13:23:49.449250 kubelet[2256]: E0414 13:23:49.438293 2256 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:50.032828 kubelet[2256]: E0414 13:23:50.032061 2256 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 14 13:23:51.265491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3-rootfs.mount: Deactivated successfully.
Apr 14 13:23:51.390955 containerd[1468]: time="2026-04-14T13:23:51.390675017Z" level=info msg="shim disconnected" id=0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3 namespace=k8s.io
Apr 14 13:23:51.458822 containerd[1468]: time="2026-04-14T13:23:51.420562671Z" level=warning msg="cleaning up after shim disconnected" id=0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3 namespace=k8s.io
Apr 14 13:23:51.458822 containerd[1468]: time="2026-04-14T13:23:51.420979469Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:23:53.251036 kubelet[2256]: I0414 13:23:53.236900 2256 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 14 13:23:53.845347 kubelet[2256]: I0414 13:23:53.844215 2256 apiserver.go:52] "Watching apiserver"
Apr 14 13:23:54.174466 kubelet[2256]: I0414 13:23:54.172882 2256 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 13:23:54.277998 kubelet[2256]: I0414 13:23:54.273165 2256 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 13:23:54.444694 kubelet[2256]: I0414 13:23:54.347231 2256 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 14 13:23:54.672894 kubelet[2256]: E0414 13:23:54.668196 2256 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:54.752399 kubelet[2256]: E0414 13:23:54.687501 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:54.960264 kubelet[2256]: I0414 13:23:54.959542 2256 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:23:55.266522 kubelet[2256]: E0414 13:23:55.263506 2256 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 14 13:23:55.357988 kubelet[2256]: I0414 13:23:55.342689 2256 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:23:55.522165 kubelet[2256]: E0414 13:23:55.371403 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:55.860631 kubelet[2256]: E0414 13:23:55.834886 2256 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:23:55.891272 kubelet[2256]: I0414 13:23:55.883740 2256 scope.go:117] "RemoveContainer" containerID="0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3"
Apr 14 13:23:55.920108 kubelet[2256]: E0414 13:23:55.918083 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:56.736156 kubelet[2256]: E0414 13:23:56.733140 2256 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.126s"
Apr 14 13:23:57.013875 kubelet[2256]: E0414 13:23:56.768664 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:57.023374 containerd[1468]: time="2026-04-14T13:23:56.858994954Z" level=info msg="CreateContainer within sandbox \"fc11938dac96ce5e8729823fd16f92c1d4e686f6e42f514f202da7cb60a54e1a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 14 13:23:57.192682 kubelet[2256]: I0414 13:23:57.190008 2256 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.189970145 podStartE2EDuration="4.189970145s" podCreationTimestamp="2026-04-14 13:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:23:55.981705021 +0000 UTC m=+160.950817232" watchObservedRunningTime="2026-04-14 13:23:57.189970145 +0000 UTC m=+162.159082335"
Apr 14 13:23:57.192682 kubelet[2256]: I0414 13:23:57.190139 2256 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.1901354729999998 podStartE2EDuration="3.190135473s" podCreationTimestamp="2026-04-14 13:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:23:57.189770278 +0000 UTC m=+162.158882475" watchObservedRunningTime="2026-04-14 13:23:57.190135473 +0000 UTC m=+162.159247674"
Apr 14 13:23:58.271045 containerd[1468]: time="2026-04-14T13:23:58.244360720Z" level=info msg="CreateContainer within sandbox \"fc11938dac96ce5e8729823fd16f92c1d4e686f6e42f514f202da7cb60a54e1a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455\""
Apr 14 13:23:58.561055 containerd[1468]: time="2026-04-14T13:23:58.544528826Z" level=info msg="StartContainer for \"8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455\""
Apr 14 13:23:58.727597 kubelet[2256]: E0414 13:23:58.727430 2256 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.085s"
Apr 14 13:23:59.918167 kubelet[2256]: E0414 13:23:59.917934 2256 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:24:00.322291 systemd[1]: Started cri-containerd-8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455.scope - libcontainer container 8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455.
Apr 14 13:24:00.473352 systemd[1]: Reloading requested from client PID 2612 ('systemctl') (unit session-7.scope)...
Apr 14 13:24:00.474759 systemd[1]: Reloading...
Apr 14 13:24:00.537131 kubelet[2256]: E0414 13:24:00.533312 2256 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:02.958142 containerd[1468]: time="2026-04-14T13:24:02.957012738Z" level=error msg="get state for 8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455" error="context deadline exceeded: unknown"
Apr 14 13:24:02.958142 containerd[1468]: time="2026-04-14T13:24:02.957283191Z" level=warning msg="unknown status" status=0
Apr 14 13:24:05.069750 zram_generator::config[2660]: No configuration found.
Apr 14 13:24:05.491130 containerd[1468]: time="2026-04-14T13:24:05.490664481Z" level=error msg="get state for 8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455" error="context deadline exceeded: unknown"
Apr 14 13:24:05.491130 containerd[1468]: time="2026-04-14T13:24:05.490991204Z" level=warning msg="unknown status" status=0
Apr 14 13:24:05.620287 kubelet[2256]: E0414 13:24:05.541267 2256 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:24:05.620287 kubelet[2256]: E0414 13:24:05.560009 2256 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.964s"
Apr 14 13:24:07.942832 containerd[1468]: time="2026-04-14T13:24:07.940476402Z" level=error msg="get state for 8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455" error="context deadline exceeded: unknown"
Apr 14 13:24:07.956363 containerd[1468]: time="2026-04-14T13:24:07.944660051Z" level=warning msg="unknown status" status=0
Apr 14 13:24:10.628022 kubelet[2256]: E0414 13:24:10.627727 2256 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:24:10.929146 containerd[1468]: time="2026-04-14T13:24:10.894184690Z" level=error msg="get state for 8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455" error="context deadline exceeded: unknown"
Apr 14 13:24:10.929146 containerd[1468]: time="2026-04-14T13:24:10.894625493Z" level=warning msg="unknown status" status=0
Apr 14 13:24:13.760931 containerd[1468]: time="2026-04-14T13:24:13.754912619Z" level=error msg="get state for 8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455" error="context deadline exceeded: unknown"
Apr 14 13:24:13.875318 containerd[1468]: time="2026-04-14T13:24:13.781977767Z" level=warning msg="unknown status" status=0
Apr 14 13:24:13.838286 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:24:15.896413 kubelet[2256]: E0414 13:24:15.895744 2256 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:24:17.490093 containerd[1468]: time="2026-04-14T13:24:17.488029976Z" level=error msg="get state for 8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455" error="context deadline exceeded: unknown"
Apr 14 13:24:17.573333 containerd[1468]: time="2026-04-14T13:24:17.559350265Z" level=warning msg="unknown status" status=0
Apr 14 13:24:17.593526 kubelet[2256]: W0414 13:24:17.590262 2256 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc6a32a2019cd173b38de969cf403b25.slice/cri-containerd-8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455.scope WatchSource:0}: containerd task is in unknown state
Apr 14 13:24:19.751968 systemd[1]: Reloading finished in 19268 ms.
Apr 14 13:24:21.055825 kubelet[2256]: E0414 13:24:21.055476 2256 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:24:21.183350 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:24:21.308166 containerd[1468]: time="2026-04-14T13:24:21.307422807Z" level=error msg="collecting metrics for 8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455" error="context canceled: unknown"
Apr 14 13:24:21.317363 systemd[1]: kubelet.service: Deactivated successfully.
Apr 14 13:24:21.318156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:24:21.318373 systemd[1]: kubelet.service: Consumed 1min 52.813s CPU time, 137.2M memory peak, 0B memory swap peak.
Apr 14 13:24:21.361533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:24:21.409196 containerd[1468]: time="2026-04-14T13:24:21.408227515Z" level=error msg="ttrpc: received message on inactive stream" stream=1
Apr 14 13:24:21.413112 containerd[1468]: time="2026-04-14T13:24:21.411835403Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 14 13:24:21.413112 containerd[1468]: time="2026-04-14T13:24:21.412083078Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 14 13:24:21.413112 containerd[1468]: time="2026-04-14T13:24:21.412092232Z" level=error msg="ttrpc: received message on inactive stream" stream=7
Apr 14 13:24:21.413112 containerd[1468]: time="2026-04-14T13:24:21.412100105Z" level=error msg="ttrpc: received message on inactive stream" stream=9
Apr 14 13:24:21.413112 containerd[1468]: time="2026-04-14T13:24:21.412110676Z" level=error msg="ttrpc: received message on inactive stream" stream=11
Apr 14 13:24:21.413112 containerd[1468]: time="2026-04-14T13:24:21.412208923Z" level=error msg="ttrpc: received message on inactive stream" stream=13
Apr 14 13:24:21.413824 containerd[1468]: time="2026-04-14T13:24:21.413666611Z" level=error msg="ttrpc: received message on inactive stream" stream=15
Apr 14 13:24:23.513101 systemd[1]: cri-containerd-8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455.scope: Deactivated successfully.
Apr 14 13:24:23.859445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455-rootfs.mount: Deactivated successfully.
Apr 14 13:24:23.931733 containerd[1468]: time="2026-04-14T13:24:23.931431064Z" level=info msg="shim disconnected" id=8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455 namespace=k8s.io
Apr 14 13:24:23.945810 containerd[1468]: time="2026-04-14T13:24:23.940016416Z" level=warning msg="cleaning up after shim disconnected" id=8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455 namespace=k8s.io
Apr 14 13:24:23.945810 containerd[1468]: time="2026-04-14T13:24:23.940408150Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:24:24.596912 containerd[1468]: time="2026-04-14T13:24:24.588488873Z" level=error msg="StartContainer for \"8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455\" failed" error="failed to create containerd task: failed to create shim task: context canceled: unknown"
Apr 14 13:24:24.863353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:24:24.881727 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 13:24:26.926140 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 14 13:24:26.926140 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 13:24:26.926140 kubelet[2727]: I0414 13:24:26.926204 2727 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 14 13:24:27.041508 kubelet[2727]: I0414 13:24:27.041228 2727 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 14 13:24:27.041508 kubelet[2727]: I0414 13:24:27.041373 2727 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 13:24:27.041508 kubelet[2727]: I0414 13:24:27.041699 2727 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 14 13:24:27.041508 kubelet[2727]: I0414 13:24:27.041718 2727 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 13:24:27.072553 kubelet[2727]: I0414 13:24:27.056108 2727 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 14 13:24:27.097839 kubelet[2727]: I0414 13:24:27.093440 2727 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 14 13:24:27.254012 kubelet[2727]: I0414 13:24:27.253558 2727 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 13:24:27.862444 kubelet[2727]: E0414 13:24:27.853734 2727 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 13:24:27.889090 kubelet[2727]: I0414 13:24:27.870105 2727 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 14 13:24:28.489631 kubelet[2727]: I0414 13:24:28.489216 2727 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 14 13:24:28.546500 kubelet[2727]: I0414 13:24:28.533508 2727 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 13:24:28.660548 kubelet[2727]: I0414 13:24:28.549858 2727 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 14 13:24:28.660548 kubelet[2727]: I0414 13:24:28.660370 2727 topology_manager.go:138] "Creating topology manager with none policy"
Apr 14 13:24:28.660548 kubelet[2727]: I0414 13:24:28.660394 2727 container_manager_linux.go:306] "Creating device plugin manager"
Apr 14 13:24:28.758345 kubelet[2727]: I0414 13:24:28.671641 2727 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 14 13:24:28.758345 kubelet[2727]: I0414 13:24:28.742112 2727 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 13:24:28.791439 kubelet[2727]: I0414 13:24:28.788652 2727 kubelet.go:475] "Attempting to sync node with API server"
Apr 14 13:24:28.791439 kubelet[2727]: I0414 13:24:28.789002 2727 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 13:24:28.791439 kubelet[2727]: I0414 13:24:28.789225 2727 kubelet.go:387] "Adding apiserver pod source"
Apr 14 13:24:28.791439 kubelet[2727]: I0414 13:24:28.789240 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 13:24:29.034461 kubelet[2727]: I0414 13:24:29.023836 2727 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 13:24:29.075009 kubelet[2727]: I0414 13:24:29.073367 2727 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 13:24:29.075009 kubelet[2727]: I0414 13:24:29.075274 2727 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 14 13:24:29.349840 kubelet[2727]: I0414 13:24:29.291368 2727 server.go:1262] "Started kubelet"
Apr 14 13:24:29.375366 kubelet[2727]: I0414 13:24:29.371204 2727 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 13:24:29.391458 kubelet[2727]: I0414 13:24:29.390182 2727 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 14 13:24:29.391458 kubelet[2727]: I0414 13:24:29.391448 2727 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 13:24:29.391458 kubelet[2727]: I0414 13:24:29.391522 2727 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 13:24:29.557931 kubelet[2727]: I0414 13:24:29.556606 2727 server.go:310] "Adding debug handlers to kubelet server"
Apr 14 13:24:29.557931 kubelet[2727]: I0414 13:24:29.557378 2727 scope.go:117] "RemoveContainer" containerID="0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3"
Apr 14 13:24:29.568441 kubelet[2727]: I0414 13:24:29.565340 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 14 13:24:29.712808 kubelet[2727]: I0414 13:24:29.616324 2727 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 13:24:29.712808 kubelet[2727]: I0414 13:24:29.703793 2727 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 14 13:24:29.712808 kubelet[2727]: I0414 13:24:29.704028 2727 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 14 13:24:29.713926 kubelet[2727]: I0414 13:24:29.713619 2727 reconciler.go:29] "Reconciler: start to sync state"
Apr 14 13:24:29.841352 kubelet[2727]: I0414 13:24:29.839895 2727 apiserver.go:52] "Watching apiserver"
Apr 14 13:24:29.862731 kubelet[2727]: I0414 13:24:29.860645 2727 factory.go:223] Registration of the systemd container factory successfully
Apr 14 13:24:29.874713 kubelet[2727]: I0414 13:24:29.872547 2727 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 13:24:29.987099 containerd[1468]: time="2026-04-14T13:24:29.973184267Z" level=info msg="RemoveContainer for \"0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3\""
Apr 14 13:24:30.074629 kubelet[2727]: E0414 13:24:29.973954 2727 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 13:24:30.349414 kubelet[2727]: I0414 13:24:30.345253 2727 factory.go:223] Registration of the containerd container factory successfully
Apr 14 13:24:30.494959 containerd[1468]: time="2026-04-14T13:24:30.490321546Z" level=info msg="RemoveContainer for \"0589ff30b97b901b903acb54f205093bcd1a932f9de0178e4cb57937794cfbf3\" returns successfully"
Apr 14 13:24:30.771002 kubelet[2727]: I0414 13:24:30.768412 2727 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 14 13:24:30.829475 kubelet[2727]: I0414 13:24:30.829103 2727 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 14 13:24:30.829475 kubelet[2727]: I0414 13:24:30.829226 2727 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 14 13:24:30.829475 kubelet[2727]: I0414 13:24:30.829449 2727 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 14 13:24:30.971029 kubelet[2727]: E0414 13:24:30.951257 2727 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 13:24:31.153371 kubelet[2727]: E0414 13:24:31.124783 2727 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 13:24:31.510242 kubelet[2727]: E0414 13:24:31.332338 2727 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 13:24:31.876491 kubelet[2727]: E0414 13:24:31.831321 2727 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 13:24:32.726371 kubelet[2727]: E0414 13:24:32.725103 2727 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 13:24:33.475323 kubelet[2727]: I0414 13:24:33.473359 2727 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 14 13:24:33.513409 kubelet[2727]: I0414 13:24:33.476269 2727 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 14 13:24:33.513409 kubelet[2727]: I0414 13:24:33.476548 2727 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 13:24:33.513409 kubelet[2727]: I0414 13:24:33.506186 2727 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 14 13:24:33.513409 kubelet[2727]: I0414 13:24:33.506279 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 14 13:24:33.513409 kubelet[2727]: I0414 13:24:33.506344 2727 policy_none.go:49] "None policy: Start"
Apr 14 13:24:33.513409 kubelet[2727]: I0414 13:24:33.506459 2727 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 14 13:24:33.513409 kubelet[2727]: I0414 13:24:33.506835 2727 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 14 13:24:33.516721 kubelet[2727]: I0414 13:24:33.514095 2727 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 14 13:24:33.516721 kubelet[2727]: I0414 13:24:33.514122 2727 policy_none.go:47] "Start"
Apr 14 13:24:33.808321 kubelet[2727]: E0414 13:24:33.805712 2727 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 13:24:33.808321 kubelet[2727]: I0414 13:24:33.813517 2727 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 14 13:24:33.814486 kubelet[2727]: I0414 13:24:33.813654 2727 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 13:24:33.826286 kubelet[2727]: I0414 13:24:33.825883 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 14 13:24:33.839301 kubelet[2727]: E0414 13:24:33.839230 2727 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 13:24:34.480698 kubelet[2727]: I0414 13:24:34.480243 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/385147077aa7ba6cb9a6805ee8a5b732-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"385147077aa7ba6cb9a6805ee8a5b732\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 13:24:34.616398 kubelet[2727]: I0414 13:24:34.615511 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/385147077aa7ba6cb9a6805ee8a5b732-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"385147077aa7ba6cb9a6805ee8a5b732\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 13:24:34.698416 kubelet[2727]: I0414 13:24:34.675844 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/385147077aa7ba6cb9a6805ee8a5b732-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"385147077aa7ba6cb9a6805ee8a5b732\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 13:24:35.060302 kubelet[2727]: I0414 13:24:35.050340 2727 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 13:24:35.176471 kubelet[2727]: I0414 13:24:35.175100 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:24:35.182760 kubelet[2727]: I0414 13:24:35.182376 2727 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:24:35.182760 kubelet[2727]: I0414 13:24:35.182664 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:24:35.182760 kubelet[2727]: I0414 13:24:35.182703 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:24:35.182760 kubelet[2727]: I0414 13:24:35.182727 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:24:35.182760 kubelet[2727]: I0414 13:24:35.182752 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc6a32a2019cd173b38de969cf403b25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dc6a32a2019cd173b38de969cf403b25\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:24:35.182760 kubelet[2727]: I0414 13:24:35.182772 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef4c7b0b14aacb703d6788ed41a925d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3ef4c7b0b14aacb703d6788ed41a925d\") " pod="kube-system/kube-scheduler-localhost"
Apr 14 13:24:35.226359 kubelet[2727]: I0414 13:24:35.225218 2727 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 13:24:35.531358 kubelet[2727]: I0414 13:24:35.441260 2727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 14 13:24:36.020975 kubelet[2727]: E0414 13:24:36.020890 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:36.256461 kubelet[2727]: E0414 13:24:36.255708 2727 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.074s"
Apr 14 13:24:36.264282 kubelet[2727]: E0414 13:24:36.258623 2727 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:24:36.458386 kubelet[2727]: I0414 13:24:36.447464 2727 scope.go:117] "RemoveContainer" containerID="8636f1c20bc91d14ddebdde09b09b16e8a97d1618ff84b7338f5745efc77e455"
Apr 14 13:24:36.545636 kubelet[2727]: E0414 13:24:36.540181 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:36.631606 kubelet[2727]: E0414 13:24:36.629511 2727 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 14 13:24:36.688561 kubelet[2727]: E0414 13:24:36.688125 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:37.184321 kubelet[2727]: I0414 13:24:37.183933 2727 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 14 13:24:37.290424 kubelet[2727]: I0414 13:24:37.287399 2727 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 14 13:24:38.053285 kubelet[2727]: E0414 13:24:38.047859 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:38.448471 kubelet[2727]: E0414 13:24:38.437690 2727 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.457s"
Apr 14 13:24:38.892570 kubelet[2727]: E0414 13:24:38.855743 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:38.927462 containerd[1468]: time="2026-04-14T13:24:38.921864449Z" level=info msg="CreateContainer within sandbox \"fc11938dac96ce5e8729823fd16f92c1d4e686f6e42f514f202da7cb60a54e1a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Apr 14 13:24:39.026935 kubelet[2727]: E0414 13:24:38.996128 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:40.684233 containerd[1468]: time="2026-04-14T13:24:40.653887482Z" level=info msg="CreateContainer within sandbox \"fc11938dac96ce5e8729823fd16f92c1d4e686f6e42f514f202da7cb60a54e1a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"fbfd72d5a60270e6e73e4646f1db5bfedd1473d8d7f45c109b89cb43465861dd\""
Apr 14 13:24:41.692914 kubelet[2727]: E0414 13:24:41.684113 2727 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.716s"
Apr 14 13:24:42.342347 containerd[1468]: time="2026-04-14T13:24:42.332348095Z" level=info msg="StartContainer for \"fbfd72d5a60270e6e73e4646f1db5bfedd1473d8d7f45c109b89cb43465861dd\""
Apr 14 13:24:43.834677 kubelet[2727]: E0414 13:24:43.832233 2727 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.796s"
Apr 14 13:24:47.385234 kubelet[2727]: E0414 13:24:47.384887 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:47.592829 kubelet[2727]: E0414 13:24:47.386213 2727 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.525s"
Apr 14 13:24:47.592829 kubelet[2727]: E0414 13:24:47.546686 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:48.938461 systemd[1]: run-containerd-runc-k8s.io-fbfd72d5a60270e6e73e4646f1db5bfedd1473d8d7f45c109b89cb43465861dd-runc.EMs02K.mount: Deactivated successfully.
Apr 14 13:24:49.985262 systemd[1]: Started cri-containerd-fbfd72d5a60270e6e73e4646f1db5bfedd1473d8d7f45c109b89cb43465861dd.scope - libcontainer container fbfd72d5a60270e6e73e4646f1db5bfedd1473d8d7f45c109b89cb43465861dd.
Apr 14 13:24:51.043485 kubelet[2727]: E0414 13:24:50.963552 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:24:52.951531 update_engine[1456]: I20260414 13:24:52.929060 1456 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 14 13:24:52.951531 update_engine[1456]: I20260414 13:24:52.947389 1456 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 14 13:24:53.109786 update_engine[1456]: I20260414 13:24:53.109510 1456 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 14 13:24:53.178526 containerd[1468]: time="2026-04-14T13:24:53.121364337Z" level=error msg="get state for fbfd72d5a60270e6e73e4646f1db5bfedd1473d8d7f45c109b89cb43465861dd" error="context deadline exceeded: unknown" Apr 14 13:24:53.712364 containerd[1468]: time="2026-04-14T13:24:53.376701292Z" level=warning msg="unknown status" status=0 Apr 14 13:24:53.716643 update_engine[1456]: I20260414 13:24:53.365422 1456 omaha_request_params.cc:62] Current group set to lts Apr 14 13:24:53.716643 update_engine[1456]: I20260414 13:24:53.515338 1456 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 14 13:24:53.716643 update_engine[1456]: I20260414 13:24:53.521780 1456 update_attempter.cc:643] Scheduling an action processor start. 
Apr 14 13:24:53.716643 update_engine[1456]: I20260414 13:24:53.523736 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 14 13:24:53.716643 update_engine[1456]: I20260414 13:24:53.566103 1456 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 14 13:24:53.716643 update_engine[1456]: I20260414 13:24:53.672835 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 14 13:24:53.716643 update_engine[1456]: I20260414 13:24:53.673107 1456 omaha_request_action.cc:272] Request: Apr 14 13:24:53.716643 update_engine[1456]: Apr 14 13:24:53.716643 update_engine[1456]: Apr 14 13:24:53.716643 update_engine[1456]: Apr 14 13:24:53.716643 update_engine[1456]: Apr 14 13:24:53.716643 update_engine[1456]: Apr 14 13:24:53.716643 update_engine[1456]: Apr 14 13:24:53.716643 update_engine[1456]: Apr 14 13:24:53.716643 update_engine[1456]: Apr 14 13:24:53.716643 update_engine[1456]: I20260414 13:24:53.673120 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 13:24:54.017145 locksmithd[1485]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 14 13:24:54.029424 kubelet[2727]: E0414 13:24:54.029146 2727 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.499s" Apr 14 13:24:54.657437 update_engine[1456]: I20260414 13:24:54.647957 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 13:24:54.860786 update_engine[1456]: I20260414 13:24:54.860354 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 14 13:24:54.861101 update_engine[1456]: E20260414 13:24:54.860898 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 13:24:54.861211 update_engine[1456]: I20260414 13:24:54.861112 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 14 13:24:54.957128 containerd[1468]: time="2026-04-14T13:24:54.931831991Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 14 13:24:55.672448 containerd[1468]: time="2026-04-14T13:24:55.671455896Z" level=info msg="StartContainer for \"fbfd72d5a60270e6e73e4646f1db5bfedd1473d8d7f45c109b89cb43465861dd\" returns successfully" Apr 14 13:24:55.717953 kubelet[2727]: E0414 13:24:55.673419 2727 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.644s" Apr 14 13:24:55.793365 kubelet[2727]: E0414 13:24:55.782496 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:24:55.938216 kubelet[2727]: E0414 13:24:55.934832 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:24:56.183740 kubelet[2727]: E0414 13:24:56.183473 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:24:57.379085 kubelet[2727]: E0414 13:24:57.364309 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:24:59.147639 kubelet[2727]: E0414 13:24:59.143220 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:25:04.701846 update_engine[1456]: I20260414 13:25:04.693902 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 13:25:04.701846 update_engine[1456]: I20260414 13:25:04.701442 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 13:25:04.701846 update_engine[1456]: I20260414 13:25:04.701953 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 13:25:04.716071 update_engine[1456]: E20260414 13:25:04.715116 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 13:25:04.718978 update_engine[1456]: I20260414 13:25:04.717895 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 14 13:25:09.176507 kubelet[2727]: E0414 13:25:09.165004 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:25:14.754659 update_engine[1456]: I20260414 13:25:14.753669 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 13:25:14.756056 update_engine[1456]: I20260414 13:25:14.755225 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 13:25:14.756056 update_engine[1456]: I20260414 13:25:14.755599 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 14 13:25:14.770126 update_engine[1456]: E20260414 13:25:14.768361 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 13:25:14.771191 update_engine[1456]: I20260414 13:25:14.770294 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 14 13:25:21.528801 kubelet[2727]: I0414 13:25:21.528170 2727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 14 13:25:21.531709 kubelet[2727]: I0414 13:25:21.530451 2727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 14 13:25:21.531781 containerd[1468]: time="2026-04-14T13:25:21.530213179Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 14 13:25:22.756981 kubelet[2727]: I0414 13:25:22.756896 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/17c0c471-8570-415e-b456-f0040aa13a52-kube-proxy\") pod \"kube-proxy-hpx85\" (UID: \"17c0c471-8570-415e-b456-f0040aa13a52\") " pod="kube-system/kube-proxy-hpx85" Apr 14 13:25:22.756981 kubelet[2727]: I0414 13:25:22.756974 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17c0c471-8570-415e-b456-f0040aa13a52-lib-modules\") pod \"kube-proxy-hpx85\" (UID: \"17c0c471-8570-415e-b456-f0040aa13a52\") " pod="kube-system/kube-proxy-hpx85" Apr 14 13:25:22.756981 kubelet[2727]: I0414 13:25:22.757000 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbb88\" (UniqueName: \"kubernetes.io/projected/17c0c471-8570-415e-b456-f0040aa13a52-kube-api-access-wbb88\") pod \"kube-proxy-hpx85\" (UID: \"17c0c471-8570-415e-b456-f0040aa13a52\") " pod="kube-system/kube-proxy-hpx85" Apr 14 13:25:22.756981 kubelet[2727]: 
I0414 13:25:22.757023 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17c0c471-8570-415e-b456-f0040aa13a52-xtables-lock\") pod \"kube-proxy-hpx85\" (UID: \"17c0c471-8570-415e-b456-f0040aa13a52\") " pod="kube-system/kube-proxy-hpx85" Apr 14 13:25:22.764197 systemd[1]: Created slice kubepods-besteffort-pod17c0c471_8570_415e_b456_f0040aa13a52.slice - libcontainer container kubepods-besteffort-pod17c0c471_8570_415e_b456_f0040aa13a52.slice. Apr 14 13:25:23.090717 systemd[1]: Created slice kubepods-besteffort-pode8e16f19_d0f3_431e_ad28_0f8d3d4950f4.slice - libcontainer container kubepods-besteffort-pode8e16f19_d0f3_431e_ad28_0f8d3d4950f4.slice. Apr 14 13:25:23.103363 kubelet[2727]: I0414 13:25:23.102845 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8e16f19-d0f3-431e-ad28-0f8d3d4950f4-var-lib-calico\") pod \"tigera-operator-5588576f44-49dwb\" (UID: \"e8e16f19-d0f3-431e-ad28-0f8d3d4950f4\") " pod="tigera-operator/tigera-operator-5588576f44-49dwb" Apr 14 13:25:23.103363 kubelet[2727]: I0414 13:25:23.102939 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkbrr\" (UniqueName: \"kubernetes.io/projected/e8e16f19-d0f3-431e-ad28-0f8d3d4950f4-kube-api-access-tkbrr\") pod \"tigera-operator-5588576f44-49dwb\" (UID: \"e8e16f19-d0f3-431e-ad28-0f8d3d4950f4\") " pod="tigera-operator/tigera-operator-5588576f44-49dwb" Apr 14 13:25:23.134217 kubelet[2727]: E0414 13:25:23.131729 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:25:23.139899 containerd[1468]: time="2026-04-14T13:25:23.138816652Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-hpx85,Uid:17c0c471-8570-415e-b456-f0040aa13a52,Namespace:kube-system,Attempt:0,}" Apr 14 13:25:23.996214 containerd[1468]: time="2026-04-14T13:25:23.963553165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-49dwb,Uid:e8e16f19-d0f3-431e-ad28-0f8d3d4950f4,Namespace:tigera-operator,Attempt:0,}" Apr 14 13:25:24.190196 containerd[1468]: time="2026-04-14T13:25:24.025820354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:25:24.190196 containerd[1468]: time="2026-04-14T13:25:24.046546219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:25:24.190196 containerd[1468]: time="2026-04-14T13:25:24.048736781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:25:24.190196 containerd[1468]: time="2026-04-14T13:25:24.105133291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:25:24.761252 update_engine[1456]: I20260414 13:25:24.746767 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 13:25:24.761252 update_engine[1456]: I20260414 13:25:24.748099 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 13:25:24.761252 update_engine[1456]: I20260414 13:25:24.748545 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 14 13:25:24.762952 update_engine[1456]: E20260414 13:25:24.762055 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 13:25:24.762952 update_engine[1456]: I20260414 13:25:24.762143 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 14 13:25:24.762952 update_engine[1456]: I20260414 13:25:24.762177 1456 omaha_request_action.cc:617] Omaha request response: Apr 14 13:25:24.762952 update_engine[1456]: E20260414 13:25:24.762346 1456 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 14 13:25:24.762952 update_engine[1456]: I20260414 13:25:24.762428 1456 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 14 13:25:24.762952 update_engine[1456]: I20260414 13:25:24.762434 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 13:25:24.762952 update_engine[1456]: I20260414 13:25:24.762437 1456 update_attempter.cc:306] Processing Done. Apr 14 13:25:24.762952 update_engine[1456]: E20260414 13:25:24.762473 1456 update_attempter.cc:619] Update failed. Apr 14 13:25:24.762952 update_engine[1456]: I20260414 13:25:24.762486 1456 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 14 13:25:24.762952 update_engine[1456]: I20260414 13:25:24.762491 1456 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 14 13:25:24.762952 update_engine[1456]: I20260414 13:25:24.762496 1456 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 14 13:25:24.762952 update_engine[1456]: I20260414 13:25:24.762550 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 14 13:25:24.762952 update_engine[1456]: I20260414 13:25:24.762610 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 14 13:25:24.762952 update_engine[1456]: I20260414 13:25:24.762617 1456 omaha_request_action.cc:272] Request: Apr 14 13:25:24.762952 update_engine[1456]: Apr 14 13:25:24.762952 update_engine[1456]: Apr 14 13:25:24.762952 update_engine[1456]: Apr 14 13:25:24.763300 update_engine[1456]: Apr 14 13:25:24.763300 update_engine[1456]: Apr 14 13:25:24.763300 update_engine[1456]: Apr 14 13:25:24.763300 update_engine[1456]: I20260414 13:25:24.762622 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 13:25:24.763300 update_engine[1456]: I20260414 13:25:24.762775 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 13:25:24.763300 update_engine[1456]: I20260414 13:25:24.762923 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 13:25:24.763729 locksmithd[1485]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 14 13:25:24.776316 containerd[1468]: time="2026-04-14T13:25:24.775333347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:25:24.776316 containerd[1468]: time="2026-04-14T13:25:24.775808712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:25:24.776316 containerd[1468]: time="2026-04-14T13:25:24.775820095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:25:24.788097 containerd[1468]: time="2026-04-14T13:25:24.780294600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:25:24.782252 systemd[1]: Started cri-containerd-7001d773daf3d39535aae90e7d80ce9b518d67fa54eaafbe96863205673c5dc8.scope - libcontainer container 7001d773daf3d39535aae90e7d80ce9b518d67fa54eaafbe96863205673c5dc8. Apr 14 13:25:24.788452 locksmithd[1485]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 14 13:25:24.788490 update_engine[1456]: E20260414 13:25:24.773708 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 13:25:24.788490 update_engine[1456]: I20260414 13:25:24.779330 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 14 13:25:24.788490 update_engine[1456]: I20260414 13:25:24.779439 1456 omaha_request_action.cc:617] Omaha request response: Apr 14 13:25:24.788490 update_engine[1456]: I20260414 13:25:24.779448 1456 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 13:25:24.788490 update_engine[1456]: I20260414 13:25:24.779451 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 13:25:24.788490 update_engine[1456]: I20260414 13:25:24.779455 1456 update_attempter.cc:306] Processing Done. Apr 14 13:25:24.788490 update_engine[1456]: I20260414 13:25:24.779461 1456 update_attempter.cc:310] Error event sent. 
Apr 14 13:25:24.788490 update_engine[1456]: I20260414 13:25:24.779520 1456 update_check_scheduler.cc:74] Next update check in 46m23s Apr 14 13:25:24.890054 systemd[1]: Started cri-containerd-b94ee7a2cb030d76e964bc1f978176b25b5b97c7dfbfffad4c11e72effcdf06a.scope - libcontainer container b94ee7a2cb030d76e964bc1f978176b25b5b97c7dfbfffad4c11e72effcdf06a. Apr 14 13:25:25.062958 containerd[1468]: time="2026-04-14T13:25:25.061806139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hpx85,Uid:17c0c471-8570-415e-b456-f0040aa13a52,Namespace:kube-system,Attempt:0,} returns sandbox id \"7001d773daf3d39535aae90e7d80ce9b518d67fa54eaafbe96863205673c5dc8\"" Apr 14 13:25:25.080718 kubelet[2727]: E0414 13:25:25.080216 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:25:25.244148 containerd[1468]: time="2026-04-14T13:25:25.243903809Z" level=info msg="CreateContainer within sandbox \"7001d773daf3d39535aae90e7d80ce9b518d67fa54eaafbe96863205673c5dc8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 14 13:25:25.443033 containerd[1468]: time="2026-04-14T13:25:25.436172291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-49dwb,Uid:e8e16f19-d0f3-431e-ad28-0f8d3d4950f4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b94ee7a2cb030d76e964bc1f978176b25b5b97c7dfbfffad4c11e72effcdf06a\"" Apr 14 13:25:25.597902 containerd[1468]: time="2026-04-14T13:25:25.597568794Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 14 13:25:25.660855 containerd[1468]: time="2026-04-14T13:25:25.660565788Z" level=info msg="CreateContainer within sandbox \"7001d773daf3d39535aae90e7d80ce9b518d67fa54eaafbe96863205673c5dc8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb8a288c4a424c7660156c2767760277cc548815b59642a9af4d4da3a045e90c\"" Apr 14 
13:25:25.686023 containerd[1468]: time="2026-04-14T13:25:25.685825077Z" level=info msg="StartContainer for \"eb8a288c4a424c7660156c2767760277cc548815b59642a9af4d4da3a045e90c\"" Apr 14 13:25:25.996396 systemd[1]: Started cri-containerd-eb8a288c4a424c7660156c2767760277cc548815b59642a9af4d4da3a045e90c.scope - libcontainer container eb8a288c4a424c7660156c2767760277cc548815b59642a9af4d4da3a045e90c. Apr 14 13:25:26.249882 containerd[1468]: time="2026-04-14T13:25:26.246219484Z" level=info msg="StartContainer for \"eb8a288c4a424c7660156c2767760277cc548815b59642a9af4d4da3a045e90c\" returns successfully" Apr 14 13:25:27.550328 kubelet[2727]: E0414 13:25:27.544246 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:25:28.586956 kubelet[2727]: E0414 13:25:28.585533 2727 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:25:29.182216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425381435.mount: Deactivated successfully. 
Apr 14 13:25:31.138349 kubelet[2727]: I0414 13:25:31.136151 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hpx85" podStartSLOduration=9.13585685 podStartE2EDuration="9.13585685s" podCreationTimestamp="2026-04-14 13:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:25:27.749610216 +0000 UTC m=+62.794769678" watchObservedRunningTime="2026-04-14 13:25:31.13585685 +0000 UTC m=+66.181016324" Apr 14 13:25:36.763248 containerd[1468]: time="2026-04-14T13:25:36.760071254Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:25:36.775809 containerd[1468]: time="2026-04-14T13:25:36.771757298Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 14 13:25:36.925904 containerd[1468]: time="2026-04-14T13:25:36.923212469Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:25:36.959691 containerd[1468]: time="2026-04-14T13:25:36.959234522Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:25:36.973475 containerd[1468]: time="2026-04-14T13:25:36.972425742Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 11.374603117s" Apr 14 13:25:36.973475 containerd[1468]: time="2026-04-14T13:25:36.972717280Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 14 13:25:37.581357 containerd[1468]: time="2026-04-14T13:25:37.580975218Z" level=info msg="CreateContainer within sandbox \"b94ee7a2cb030d76e964bc1f978176b25b5b97c7dfbfffad4c11e72effcdf06a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 14 13:25:37.833234 containerd[1468]: time="2026-04-14T13:25:37.832519583Z" level=info msg="CreateContainer within sandbox \"b94ee7a2cb030d76e964bc1f978176b25b5b97c7dfbfffad4c11e72effcdf06a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f281e4177345711a2944af77a90f065a715d99fe9d75224670e07a4abbf1601c\"" Apr 14 13:25:37.869990 containerd[1468]: time="2026-04-14T13:25:37.863494440Z" level=info msg="StartContainer for \"f281e4177345711a2944af77a90f065a715d99fe9d75224670e07a4abbf1601c\"" Apr 14 13:25:38.409900 systemd[1]: Started cri-containerd-f281e4177345711a2944af77a90f065a715d99fe9d75224670e07a4abbf1601c.scope - libcontainer container f281e4177345711a2944af77a90f065a715d99fe9d75224670e07a4abbf1601c. 
Apr 14 13:25:39.169009 containerd[1468]: time="2026-04-14T13:25:39.166163857Z" level=info msg="StartContainer for \"f281e4177345711a2944af77a90f065a715d99fe9d75224670e07a4abbf1601c\" returns successfully" Apr 14 13:25:40.251509 kubelet[2727]: I0414 13:25:40.250900 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-49dwb" podStartSLOduration=6.771956088 podStartE2EDuration="18.248338375s" podCreationTimestamp="2026-04-14 13:25:22 +0000 UTC" firstStartedPulling="2026-04-14 13:25:25.572652445 +0000 UTC m=+60.617811916" lastFinishedPulling="2026-04-14 13:25:37.049034745 +0000 UTC m=+72.094194203" observedRunningTime="2026-04-14 13:25:40.244512163 +0000 UTC m=+75.289671640" watchObservedRunningTime="2026-04-14 13:25:40.248338375 +0000 UTC m=+75.293497839" Apr 14 13:26:00.632295 sudo[1636]: pam_unix(sudo:session): session closed for user root Apr 14 13:26:00.775866 sshd[1633]: pam_unix(sshd:session): session closed for user core Apr 14 13:26:01.109113 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:43524.service: Deactivated successfully. Apr 14 13:26:01.290410 systemd[1]: session-7.scope: Deactivated successfully. Apr 14 13:26:01.291056 systemd[1]: session-7.scope: Consumed 1min 19.230s CPU time, 163.1M memory peak, 0B memory swap peak. Apr 14 13:26:01.333445 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Apr 14 13:26:01.519153 systemd-logind[1454]: Removed session 7.