Apr 16 03:13:03.297991 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:45:03 -00 2026 Apr 16 03:13:03.298023 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c Apr 16 03:13:03.298036 kernel: BIOS-provided physical RAM map: Apr 16 03:13:03.298044 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 16 03:13:03.298051 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 16 03:13:03.298059 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 16 03:13:03.298082 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 16 03:13:03.298090 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 16 03:13:03.298096 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 16 03:13:03.298105 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 16 03:13:03.298111 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 16 03:13:03.298117 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 16 03:13:03.298124 kernel: NX (Execute Disable) protection: active Apr 16 03:13:03.298130 kernel: APIC: Static calls initialized Apr 16 03:13:03.298138 kernel: SMBIOS 2.8 present. 
Apr 16 03:13:03.298147 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 16 03:13:03.298153 kernel: Hypervisor detected: KVM Apr 16 03:13:03.298160 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 16 03:13:03.298167 kernel: kvm-clock: using sched offset of 6576808596 cycles Apr 16 03:13:03.298174 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 16 03:13:03.298181 kernel: tsc: Detected 2793.438 MHz processor Apr 16 03:13:03.298188 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 16 03:13:03.298195 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 16 03:13:03.298202 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 16 03:13:03.298211 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 16 03:13:03.298219 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 16 03:13:03.298225 kernel: Using GB pages for direct mapping Apr 16 03:13:03.298232 kernel: ACPI: Early table checksum verification disabled Apr 16 03:13:03.298239 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 16 03:13:03.298246 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 03:13:03.298253 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 03:13:03.298259 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 03:13:03.298266 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 16 03:13:03.298275 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 03:13:03.298281 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 03:13:03.298288 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 03:13:03.298295 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Apr 16 03:13:03.298302 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 16 03:13:03.298308 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 16 03:13:03.298315 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 16 03:13:03.298326 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 16 03:13:03.298335 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 16 03:13:03.298342 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 16 03:13:03.298350 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 16 03:13:03.298357 kernel: No NUMA configuration found Apr 16 03:13:03.298364 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 16 03:13:03.298372 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 16 03:13:03.298381 kernel: Zone ranges: Apr 16 03:13:03.298388 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 16 03:13:03.298395 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 16 03:13:03.298402 kernel: Normal empty Apr 16 03:13:03.298410 kernel: Movable zone start for each node Apr 16 03:13:03.298417 kernel: Early memory node ranges Apr 16 03:13:03.298424 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 16 03:13:03.298431 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 16 03:13:03.298438 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 16 03:13:03.298446 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 16 03:13:03.298455 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 16 03:13:03.298462 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 16 03:13:03.298469 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 16 03:13:03.298476 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 16 03:13:03.298484 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 16 03:13:03.298491 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 16 03:13:03.298498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 16 03:13:03.298505 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 16 03:13:03.298512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 16 03:13:03.298521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 16 03:13:03.298529 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 16 03:13:03.298536 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 16 03:13:03.298543 kernel: TSC deadline timer available Apr 16 03:13:03.298550 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 16 03:13:03.298558 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 16 03:13:03.298569 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 16 03:13:03.298576 kernel: kvm-guest: setup PV sched yield Apr 16 03:13:03.298583 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 16 03:13:03.298592 kernel: Booting paravirtualized kernel on KVM Apr 16 03:13:03.298600 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 16 03:13:03.298607 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 16 03:13:03.298614 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 16 03:13:03.298621 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 16 03:13:03.298628 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 16 03:13:03.298635 kernel: kvm-guest: PV spinlocks enabled Apr 16 03:13:03.298643 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 16 03:13:03.298651 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c Apr 16 03:13:03.298660 kernel: random: crng init done Apr 16 03:13:03.298667 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 16 03:13:03.298675 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 16 03:13:03.299127 kernel: Fallback order for Node 0: 0 Apr 16 03:13:03.299140 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Apr 16 03:13:03.299148 kernel: Policy zone: DMA32 Apr 16 03:13:03.299155 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 16 03:13:03.299163 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved) Apr 16 03:13:03.299174 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 16 03:13:03.299181 kernel: ftrace: allocating 37996 entries in 149 pages Apr 16 03:13:03.299188 kernel: ftrace: allocated 149 pages with 4 groups Apr 16 03:13:03.299195 kernel: Dynamic Preempt: voluntary Apr 16 03:13:03.299202 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 16 03:13:03.299211 kernel: rcu: RCU event tracing is enabled. Apr 16 03:13:03.299218 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 16 03:13:03.299226 kernel: Trampoline variant of Tasks RCU enabled. Apr 16 03:13:03.299233 kernel: Rude variant of Tasks RCU enabled. Apr 16 03:13:03.299242 kernel: Tracing variant of Tasks RCU enabled. Apr 16 03:13:03.299250 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 16 03:13:03.299257 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 16 03:13:03.299264 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 16 03:13:03.299272 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 16 03:13:03.299279 kernel: Console: colour VGA+ 80x25 Apr 16 03:13:03.299286 kernel: printk: console [ttyS0] enabled Apr 16 03:13:03.299294 kernel: ACPI: Core revision 20230628 Apr 16 03:13:03.299302 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 16 03:13:03.299312 kernel: APIC: Switch to symmetric I/O mode setup Apr 16 03:13:03.299320 kernel: x2apic enabled Apr 16 03:13:03.299327 kernel: APIC: Switched APIC routing to: physical x2apic Apr 16 03:13:03.299335 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 16 03:13:03.299342 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 16 03:13:03.299349 kernel: kvm-guest: setup PV IPIs Apr 16 03:13:03.299357 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 16 03:13:03.299364 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 16 03:13:03.299381 kernel: Calibrating delay loop (skipped) preset value.. 
5586.87 BogoMIPS (lpj=2793438) Apr 16 03:13:03.299389 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 16 03:13:03.299397 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 16 03:13:03.299405 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 16 03:13:03.299415 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 16 03:13:03.299422 kernel: Spectre V2 : Mitigation: Retpolines Apr 16 03:13:03.299430 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 16 03:13:03.299438 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 16 03:13:03.299448 kernel: RETBleed: Vulnerable Apr 16 03:13:03.299456 kernel: Speculative Store Bypass: Vulnerable Apr 16 03:13:03.299463 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 16 03:13:03.299471 kernel: GDS: Unknown: Dependent on hypervisor status Apr 16 03:13:03.299480 kernel: active return thunk: its_return_thunk Apr 16 03:13:03.299488 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 16 03:13:03.299496 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 16 03:13:03.299504 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 16 03:13:03.299512 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 16 03:13:03.299522 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 16 03:13:03.299530 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 16 03:13:03.299538 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 16 03:13:03.299546 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 16 03:13:03.299554 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 16 03:13:03.299561 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 16 03:13:03.299569 kernel: x86/fpu: 
xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 16 03:13:03.299577 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 16 03:13:03.299587 kernel: Freeing SMP alternatives memory: 32K Apr 16 03:13:03.299609 kernel: pid_max: default: 32768 minimum: 301 Apr 16 03:13:03.299618 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 16 03:13:03.299626 kernel: landlock: Up and running. Apr 16 03:13:03.299634 kernel: SELinux: Initializing. Apr 16 03:13:03.299642 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 16 03:13:03.299650 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 16 03:13:03.299658 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 16 03:13:03.299666 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 03:13:03.299673 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 03:13:03.299897 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 03:13:03.299908 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 16 03:13:03.299917 kernel: signal: max sigframe size: 3632 Apr 16 03:13:03.299928 kernel: rcu: Hierarchical SRCU implementation. Apr 16 03:13:03.299938 kernel: rcu: Max phase no-delay instances is 400. Apr 16 03:13:03.299947 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 16 03:13:03.300090 kernel: smp: Bringing up secondary CPUs ... Apr 16 03:13:03.300099 kernel: smpboot: x86: Booting SMP configuration: Apr 16 03:13:03.300107 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 16 03:13:03.300118 kernel: smp: Brought up 1 node, 4 CPUs Apr 16 03:13:03.300126 kernel: smpboot: Max logical packages: 1 Apr 16 03:13:03.300134 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 16 03:13:03.300142 kernel: devtmpfs: initialized Apr 16 03:13:03.300150 kernel: x86/mm: Memory block size: 128MB Apr 16 03:13:03.300158 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 16 03:13:03.300166 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 16 03:13:03.300175 kernel: pinctrl core: initialized pinctrl subsystem Apr 16 03:13:03.300183 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 16 03:13:03.300193 kernel: audit: initializing netlink subsys (disabled) Apr 16 03:13:03.300201 kernel: audit: type=2000 audit(1776309180.875:1): state=initialized audit_enabled=0 res=1 Apr 16 03:13:03.300209 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 16 03:13:03.300217 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 16 03:13:03.300225 kernel: cpuidle: using governor menu Apr 16 03:13:03.300233 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 16 03:13:03.300241 kernel: dca service started, version 1.12.1 Apr 16 03:13:03.300249 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 16 03:13:03.300257 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 16 03:13:03.300267 kernel: PCI: Using configuration type 1 for base access Apr 16 03:13:03.300276 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 16 03:13:03.300286 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 16 03:13:03.300296 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 16 03:13:03.300307 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 16 03:13:03.300318 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 16 03:13:03.300329 kernel: ACPI: Added _OSI(Module Device) Apr 16 03:13:03.300340 kernel: ACPI: Added _OSI(Processor Device) Apr 16 03:13:03.300350 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 16 03:13:03.300363 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 16 03:13:03.300374 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 16 03:13:03.300385 kernel: ACPI: Interpreter enabled Apr 16 03:13:03.300396 kernel: ACPI: PM: (supports S0 S3 S5) Apr 16 03:13:03.300407 kernel: ACPI: Using IOAPIC for interrupt routing Apr 16 03:13:03.300418 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 16 03:13:03.300429 kernel: PCI: Using E820 reservations for host bridge windows Apr 16 03:13:03.300439 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 16 03:13:03.300450 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 16 03:13:03.301071 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 16 03:13:03.301209 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 16 03:13:03.301304 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 16 03:13:03.301318 kernel: PCI host bridge to bus 0000:00 Apr 16 03:13:03.301412 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 16 03:13:03.301494 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 16 03:13:03.301579 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 16 03:13:03.301659 kernel: pci_bus 0000:00: 
root bus resource [mem 0x9d000000-0xafffffff window] Apr 16 03:13:03.301776 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 16 03:13:03.301876 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 16 03:13:03.301956 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 16 03:13:03.302061 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 16 03:13:03.302162 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 16 03:13:03.302260 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 16 03:13:03.302351 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 16 03:13:03.302441 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 16 03:13:03.302530 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 16 03:13:03.302629 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 16 03:13:03.302771 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 16 03:13:03.303327 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 16 03:13:03.303432 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 16 03:13:03.303533 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 16 03:13:03.303626 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 16 03:13:03.303757 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 16 03:13:03.303877 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Apr 16 03:13:03.303985 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 16 03:13:03.304081 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 16 03:13:03.304174 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 16 03:13:03.304266 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 16 03:13:03.304356 kernel: pci 0000:00:04.0: reg 0x30: [mem 
0xfeb80000-0xfebbffff pref] Apr 16 03:13:03.304454 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 16 03:13:03.304545 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 16 03:13:03.304643 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 16 03:13:03.304813 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 16 03:13:03.304912 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 16 03:13:03.305013 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 16 03:13:03.305104 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 16 03:13:03.305118 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 16 03:13:03.305129 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 16 03:13:03.305140 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 16 03:13:03.305150 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 16 03:13:03.305165 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 16 03:13:03.305176 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 16 03:13:03.305186 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 16 03:13:03.305197 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 16 03:13:03.305208 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 16 03:13:03.305219 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 16 03:13:03.305230 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 16 03:13:03.305240 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 16 03:13:03.305251 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 16 03:13:03.305263 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 16 03:13:03.305274 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 16 03:13:03.305285 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 16 03:13:03.305295 
kernel: iommu: Default domain type: Translated Apr 16 03:13:03.305306 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 16 03:13:03.305317 kernel: PCI: Using ACPI for IRQ routing Apr 16 03:13:03.305327 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 16 03:13:03.305338 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 16 03:13:03.305349 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 16 03:13:03.305443 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 16 03:13:03.305534 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 16 03:13:03.305624 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 16 03:13:03.305637 kernel: vgaarb: loaded Apr 16 03:13:03.305649 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 16 03:13:03.305660 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 16 03:13:03.305671 kernel: clocksource: Switched to clocksource kvm-clock Apr 16 03:13:03.305710 kernel: VFS: Disk quotas dquot_6.6.0 Apr 16 03:13:03.305724 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 16 03:13:03.305746 kernel: pnp: PnP ACPI init Apr 16 03:13:03.305885 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 16 03:13:03.305902 kernel: pnp: PnP ACPI: found 6 devices Apr 16 03:13:03.305913 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 16 03:13:03.305923 kernel: NET: Registered PF_INET protocol family Apr 16 03:13:03.305931 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 16 03:13:03.305940 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 16 03:13:03.305952 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 16 03:13:03.305962 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 16 03:13:03.305972 kernel: TCP 
bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 16 03:13:03.305982 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 16 03:13:03.305993 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 03:13:03.306004 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 03:13:03.306015 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 16 03:13:03.306025 kernel: NET: Registered PF_XDP protocol family Apr 16 03:13:03.306120 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 16 03:13:03.306208 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 16 03:13:03.306291 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 16 03:13:03.306375 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 16 03:13:03.306455 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 16 03:13:03.306521 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 16 03:13:03.306532 kernel: PCI: CLS 0 bytes, default 64 Apr 16 03:13:03.306540 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 16 03:13:03.306549 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 16 03:13:03.306561 kernel: Initialise system trusted keyrings Apr 16 03:13:03.306569 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 16 03:13:03.306577 kernel: Key type asymmetric registered Apr 16 03:13:03.306585 kernel: Asymmetric key parser 'x509' registered Apr 16 03:13:03.306593 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 16 03:13:03.306602 kernel: io scheduler mq-deadline registered Apr 16 03:13:03.306609 kernel: io scheduler kyber registered Apr 16 03:13:03.306617 kernel: io scheduler bfq registered Apr 16 03:13:03.306625 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 
Apr 16 03:13:03.306637 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 16 03:13:03.306645 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 16 03:13:03.306653 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 16 03:13:03.306661 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 16 03:13:03.306671 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 16 03:13:03.307215 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 16 03:13:03.307246 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 16 03:13:03.307256 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 16 03:13:03.307395 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 16 03:13:03.307416 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 16 03:13:03.307498 kernel: rtc_cmos 00:04: registered as rtc0 Apr 16 03:13:03.307580 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T03:13:02 UTC (1776309182) Apr 16 03:13:03.307661 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 16 03:13:03.307674 kernel: intel_pstate: CPU model not supported Apr 16 03:13:03.307839 kernel: NET: Registered PF_INET6 protocol family Apr 16 03:13:03.307851 kernel: Segment Routing with IPv6 Apr 16 03:13:03.307862 kernel: In-situ OAM (IOAM) with IPv6 Apr 16 03:13:03.307879 kernel: NET: Registered PF_PACKET protocol family Apr 16 03:13:03.307890 kernel: Key type dns_resolver registered Apr 16 03:13:03.307901 kernel: IPI shorthand broadcast: enabled Apr 16 03:13:03.307911 kernel: sched_clock: Marking stable (1468018308, 409988439)->(2122555395, -244548648) Apr 16 03:13:03.307922 kernel: registered taskstats version 1 Apr 16 03:13:03.307933 kernel: Loading compiled-in X.509 certificates Apr 16 03:13:03.307944 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6e6d886174c86dc730e1b14e46a1dab518d9b090' Apr 16 03:13:03.307955 kernel: Key type .fscrypt 
registered Apr 16 03:13:03.307965 kernel: Key type fscrypt-provisioning registered Apr 16 03:13:03.307978 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 16 03:13:03.307990 kernel: ima: Allocated hash algorithm: sha1 Apr 16 03:13:03.308000 kernel: ima: No architecture policies found Apr 16 03:13:03.308011 kernel: clk: Disabling unused clocks Apr 16 03:13:03.308022 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 16 03:13:03.308033 kernel: Write protecting the kernel read-only data: 36864k Apr 16 03:13:03.308044 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 16 03:13:03.308055 kernel: Run /init as init process Apr 16 03:13:03.308066 kernel: with arguments: Apr 16 03:13:03.308077 kernel: /init Apr 16 03:13:03.308089 kernel: with environment: Apr 16 03:13:03.308100 kernel: HOME=/ Apr 16 03:13:03.308111 kernel: TERM=linux Apr 16 03:13:03.308124 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 16 03:13:03.308139 systemd[1]: Detected virtualization kvm. Apr 16 03:13:03.308150 systemd[1]: Detected architecture x86-64. Apr 16 03:13:03.308162 systemd[1]: Running in initrd. Apr 16 03:13:03.308174 systemd[1]: No hostname configured, using default hostname. Apr 16 03:13:03.308185 systemd[1]: Hostname set to . Apr 16 03:13:03.308197 systemd[1]: Initializing machine ID from VM UUID. Apr 16 03:13:03.308208 systemd[1]: Queued start job for default target initrd.target. Apr 16 03:13:03.308220 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 03:13:03.308231 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 16 03:13:03.308245 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 16 03:13:03.308256 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 03:13:03.308269 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 16 03:13:03.308281 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 16 03:13:03.308309 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 16 03:13:03.308321 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 16 03:13:03.308332 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 03:13:03.308347 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 03:13:03.308359 systemd[1]: Reached target paths.target - Path Units. Apr 16 03:13:03.308371 systemd[1]: Reached target slices.target - Slice Units. Apr 16 03:13:03.308383 systemd[1]: Reached target swap.target - Swaps. Apr 16 03:13:03.308394 systemd[1]: Reached target timers.target - Timer Units. Apr 16 03:13:03.308406 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 03:13:03.308417 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 03:13:03.308430 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 16 03:13:03.308443 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 16 03:13:03.308456 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 03:13:03.308469 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 03:13:03.308481 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 16 03:13:03.308492 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 03:13:03.308504 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 03:13:03.308516 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 03:13:03.308528 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 03:13:03.308540 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 03:13:03.308557 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 03:13:03.308568 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 03:13:03.308581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 03:13:03.308593 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 03:13:03.308605 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 03:13:03.308617 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 03:13:03.308660 systemd-journald[195]: Collecting audit messages is disabled.
Apr 16 03:13:03.308721 systemd-journald[195]: Journal started
Apr 16 03:13:03.308749 systemd-journald[195]: Runtime Journal (/run/log/journal/d70ef9660dc449dc86c013b7f9cf2b23) is 6.0M, max 48.4M, 42.3M free.
Apr 16 03:13:03.311091 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 03:13:03.294307 systemd-modules-load[196]: Inserted module 'overlay'
Apr 16 03:13:03.441908 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 03:13:03.441940 kernel: Bridge firewalling registered
Apr 16 03:13:03.343288 systemd-modules-load[196]: Inserted module 'br_netfilter'
Apr 16 03:13:03.448972 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 03:13:03.465151 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 03:13:03.469286 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 03:13:03.479233 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 03:13:03.505406 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 03:13:03.516352 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 03:13:03.523414 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 03:13:03.533081 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 03:13:03.554471 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 03:13:03.555637 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 03:13:03.561286 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 03:13:03.628609 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 03:13:03.635549 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 03:13:03.645774 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 03:13:03.682877 dracut-cmdline[234]: dracut-dracut-053
Apr 16 03:13:03.685809 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 03:13:03.699164 systemd-resolved[232]: Positive Trust Anchors:
Apr 16 03:13:03.699182 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 03:13:03.699222 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 03:13:03.703096 systemd-resolved[232]: Defaulting to hostname 'linux'.
Apr 16 03:13:03.704972 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 03:13:03.707675 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 03:13:03.901317 kernel: SCSI subsystem initialized
Apr 16 03:13:03.941443 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 03:13:03.982620 kernel: iscsi: registered transport (tcp)
Apr 16 03:13:04.023078 kernel: iscsi: registered transport (qla4xxx)
Apr 16 03:13:04.023469 kernel: QLogic iSCSI HBA Driver
Apr 16 03:13:04.161391 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 03:13:04.177245 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 03:13:04.276300 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 03:13:04.276507 kernel: device-mapper: uevent: version 1.0.3
Apr 16 03:13:04.278110 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 16 03:13:04.368050 kernel: raid6: avx512x4 gen() 32961 MB/s
Apr 16 03:13:04.386047 kernel: raid6: avx512x2 gen() 30604 MB/s
Apr 16 03:13:04.404267 kernel: raid6: avx512x1 gen() 27761 MB/s
Apr 16 03:13:04.421978 kernel: raid6: avx2x4 gen() 19608 MB/s
Apr 16 03:13:04.440142 kernel: raid6: avx2x2 gen() 20206 MB/s
Apr 16 03:13:04.459198 kernel: raid6: avx2x1 gen() 13671 MB/s
Apr 16 03:13:04.459393 kernel: raid6: using algorithm avx512x4 gen() 32961 MB/s
Apr 16 03:13:04.479119 kernel: raid6: .... xor() 7926 MB/s, rmw enabled
Apr 16 03:13:04.479370 kernel: raid6: using avx512x2 recovery algorithm
Apr 16 03:13:04.516013 kernel: xor: automatically using best checksumming function avx
Apr 16 03:13:04.886022 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 03:13:04.916196 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 03:13:04.936450 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 03:13:04.960993 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Apr 16 03:13:04.969453 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 03:13:04.985940 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 03:13:05.044108 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Apr 16 03:13:05.125406 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 03:13:05.165121 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 03:13:05.227972 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 03:13:05.251482 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 03:13:05.281063 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 03:13:05.295339 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 03:13:05.302835 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 03:13:05.308718 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 03:13:05.324729 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 16 03:13:05.323114 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 03:13:05.334856 kernel: cryptd: max_cpu_qlen set to 1000
Apr 16 03:13:05.337662 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 16 03:13:05.345101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 03:13:05.349891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 03:13:05.359445 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 03:13:05.368431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 03:13:05.385001 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 03:13:05.385090 kernel: GPT:9289727 != 19775487
Apr 16 03:13:05.385136 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 03:13:05.385153 kernel: GPT:9289727 != 19775487
Apr 16 03:13:05.385167 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 03:13:05.385181 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 03:13:05.368659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 03:13:05.384987 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 03:13:05.405942 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 16 03:13:05.406026 kernel: AES CTR mode by8 optimization enabled
Apr 16 03:13:05.407859 kernel: libata version 3.00 loaded.
Apr 16 03:13:05.411835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 03:13:05.420591 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 03:13:05.440911 kernel: ahci 0000:00:1f.2: version 3.0
Apr 16 03:13:05.450179 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 16 03:13:05.457793 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 16 03:13:05.458241 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 16 03:13:05.460669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 16 03:13:05.685678 kernel: BTRFS: device fsid 936fcbd8-a8ab-4e87-b115-d77c7a08e984 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (474)
Apr 16 03:13:05.685759 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Apr 16 03:13:05.685791 kernel: scsi host0: ahci
Apr 16 03:13:05.687122 kernel: scsi host1: ahci
Apr 16 03:13:05.687242 kernel: scsi host2: ahci
Apr 16 03:13:05.687646 kernel: scsi host3: ahci
Apr 16 03:13:05.687882 kernel: scsi host4: ahci
Apr 16 03:13:05.688001 kernel: scsi host5: ahci
Apr 16 03:13:05.688128 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 16 03:13:05.688143 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 16 03:13:05.688157 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 16 03:13:05.688171 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 16 03:13:05.688184 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 16 03:13:05.688198 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 16 03:13:05.694020 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 16 03:13:05.703330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 03:13:05.718456 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 03:13:05.724593 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 16 03:13:05.726904 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 16 03:13:05.749847 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 03:13:05.756990 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 03:13:05.772612 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 03:13:05.772648 disk-uuid[565]: Primary Header is updated.
Apr 16 03:13:05.772648 disk-uuid[565]: Secondary Entries is updated.
Apr 16 03:13:05.772648 disk-uuid[565]: Secondary Header is updated.
Apr 16 03:13:05.795005 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 16 03:13:05.795066 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 16 03:13:05.796156 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 16 03:13:05.796729 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 16 03:13:05.799513 kernel: ata3.00: applying bridge limits
Apr 16 03:13:05.801773 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 16 03:13:05.804744 kernel: ata3.00: configured for UDMA/100
Apr 16 03:13:05.805138 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 03:13:05.812740 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 16 03:13:05.814759 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 16 03:13:05.815671 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 03:13:05.886016 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 16 03:13:05.886433 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 03:13:05.903745 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 16 03:13:06.798759 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 03:13:06.800189 disk-uuid[567]: The operation has completed successfully.
Apr 16 03:13:06.907336 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 03:13:06.907495 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 03:13:06.925022 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 03:13:06.933784 sh[594]: Success
Apr 16 03:13:06.991725 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 16 03:13:07.079255 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 03:13:07.087511 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 03:13:07.107889 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 03:13:07.129522 kernel: BTRFS info (device dm-0): first mount of filesystem 936fcbd8-a8ab-4e87-b115-d77c7a08e984
Apr 16 03:13:07.129758 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 16 03:13:07.133088 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 16 03:13:07.136029 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 16 03:13:07.138612 kernel: BTRFS info (device dm-0): using free space tree
Apr 16 03:13:07.155054 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 03:13:07.161336 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 03:13:07.175488 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 03:13:07.187064 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 03:13:07.198792 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 03:13:07.198868 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 03:13:07.201333 kernel: BTRFS info (device vda6): using free space tree
Apr 16 03:13:07.221550 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 03:13:07.237197 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 16 03:13:07.243094 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 03:13:07.264041 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 03:13:07.276302 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 03:13:07.429455 ignition[682]: Ignition 2.19.0
Apr 16 03:13:07.429478 ignition[682]: Stage: fetch-offline
Apr 16 03:13:07.429516 ignition[682]: no configs at "/usr/lib/ignition/base.d"
Apr 16 03:13:07.429526 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 03:13:07.429653 ignition[682]: parsed url from cmdline: ""
Apr 16 03:13:07.429656 ignition[682]: no config URL provided
Apr 16 03:13:07.429663 ignition[682]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 03:13:07.429671 ignition[682]: no config at "/usr/lib/ignition/user.ign"
Apr 16 03:13:07.429735 ignition[682]: op(1): [started] loading QEMU firmware config module
Apr 16 03:13:07.429741 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 16 03:13:07.483184 ignition[682]: op(1): [finished] loading QEMU firmware config module
Apr 16 03:13:07.508180 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 03:13:07.535596 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 03:13:07.588722 systemd-networkd[782]: lo: Link UP
Apr 16 03:13:07.588753 systemd-networkd[782]: lo: Gained carrier
Apr 16 03:13:07.590172 systemd-networkd[782]: Enumeration completed
Apr 16 03:13:07.590568 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 03:13:07.591550 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 03:13:07.591556 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 03:13:07.600436 systemd-networkd[782]: eth0: Link UP
Apr 16 03:13:07.600441 systemd-networkd[782]: eth0: Gained carrier
Apr 16 03:13:07.600455 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 03:13:07.631535 systemd[1]: Reached target network.target - Network.
Apr 16 03:13:07.665152 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 03:13:07.723958 ignition[682]: parsing config with SHA512: 3681cb862ea61907413db844790b16fd7d9d0532322bec81c514f213a9420d252b0bae17d320d0c7a9979bb6e81a420cb430ded7225f357f9ce04b85562b12da
Apr 16 03:13:07.746247 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.7
Apr 16 03:13:07.746274 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Apr 16 03:13:07.748662 unknown[682]: fetched base config from "system"
Apr 16 03:13:07.748673 unknown[682]: fetched user config from "qemu"
Apr 16 03:13:07.753883 ignition[682]: fetch-offline: fetch-offline passed
Apr 16 03:13:07.754169 ignition[682]: Ignition finished successfully
Apr 16 03:13:07.766055 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 03:13:07.768927 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 16 03:13:07.787509 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 03:13:07.838796 ignition[786]: Ignition 2.19.0
Apr 16 03:13:07.838835 ignition[786]: Stage: kargs
Apr 16 03:13:07.839052 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Apr 16 03:13:07.839062 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 03:13:07.840314 ignition[786]: kargs: kargs passed
Apr 16 03:13:07.840377 ignition[786]: Ignition finished successfully
Apr 16 03:13:07.865001 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 03:13:07.881317 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 03:13:07.937196 ignition[794]: Ignition 2.19.0
Apr 16 03:13:07.937226 ignition[794]: Stage: disks
Apr 16 03:13:07.937547 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Apr 16 03:13:07.951249 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 03:13:07.937563 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 03:13:07.940042 ignition[794]: disks: disks passed
Apr 16 03:13:07.940107 ignition[794]: Ignition finished successfully
Apr 16 03:13:07.981098 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 03:13:07.983358 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 03:13:07.994627 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 03:13:07.999962 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 03:13:08.013446 systemd[1]: Reached target basic.target - Basic System.
Apr 16 03:13:08.043586 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 03:13:08.074434 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 16 03:13:08.111380 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 03:13:08.143452 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 03:13:08.423197 kernel: EXT4-fs (vda9): mounted filesystem 9ac74074-8829-477f-a4c4-5563740ec49b r/w with ordered data mode. Quota mode: none.
Apr 16 03:13:08.423778 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 03:13:08.429907 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 03:13:08.444899 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 03:13:08.451131 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 03:13:08.454915 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 03:13:08.454974 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 03:13:08.455280 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 03:13:08.470996 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 03:13:08.505638 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 03:13:08.528265 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Apr 16 03:13:08.537555 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 03:13:08.537626 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 03:13:08.541224 kernel: BTRFS info (device vda6): using free space tree
Apr 16 03:13:08.555597 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 03:13:08.560494 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 03:13:08.661711 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 03:13:08.702013 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Apr 16 03:13:08.738796 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 03:13:08.762603 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 03:13:09.070475 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 03:13:09.120375 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 03:13:09.139194 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 03:13:09.157513 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 03:13:09.166068 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 03:13:09.279575 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 03:13:09.294979 ignition[924]: INFO : Ignition 2.19.0
Apr 16 03:13:09.298641 ignition[924]: INFO : Stage: mount
Apr 16 03:13:09.298641 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 03:13:09.298641 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 03:13:09.316022 ignition[924]: INFO : mount: mount passed
Apr 16 03:13:09.316022 ignition[924]: INFO : Ignition finished successfully
Apr 16 03:13:09.322450 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 03:13:09.346653 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 03:13:09.453997 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 03:13:09.486889 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Apr 16 03:13:09.495102 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 03:13:09.495285 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 03:13:09.495300 kernel: BTRFS info (device vda6): using free space tree
Apr 16 03:13:09.524155 systemd-networkd[782]: eth0: Gained IPv6LL
Apr 16 03:13:09.531199 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 03:13:09.532960 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 03:13:09.640950 ignition[954]: INFO : Ignition 2.19.0
Apr 16 03:13:09.640950 ignition[954]: INFO : Stage: files
Apr 16 03:13:09.640950 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 03:13:09.640950 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 03:13:09.640950 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 03:13:09.677813 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 03:13:09.677813 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 03:13:09.693137 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 03:13:09.702541 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 03:13:09.722275 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 03:13:09.720165 unknown[954]: wrote ssh authorized keys file for user: core
Apr 16 03:13:09.732785 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 16 03:13:09.738841 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 16 03:13:09.738841 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 03:13:09.738841 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 16 03:13:09.841644 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 16 03:13:10.336088 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 03:13:10.349528 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 03:13:10.349528 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 03:13:10.349528 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 03:13:10.349528 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 03:13:10.349528 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 03:13:10.349528 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 03:13:10.349528 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 03:13:10.349528 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 03:13:10.400674 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 03:13:10.400674 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 03:13:10.400674 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 03:13:10.400674 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 03:13:10.400674 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 03:13:10.400674 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 16 03:13:10.789592 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 16 03:13:12.568920 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 03:13:12.568920 ignition[954]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 16 03:13:12.592178 ignition[954]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 16 03:13:12.613884 ignition[954]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 16 03:13:12.613884 ignition[954]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 16 03:13:12.613884 ignition[954]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 16 03:13:12.613884 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 03:13:12.613884 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 03:13:12.613884 ignition[954]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 16 03:13:12.613884 ignition[954]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 16 03:13:12.707326 ignition[954]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 03:13:12.707326 ignition[954]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 03:13:12.707326 ignition[954]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 16 03:13:12.707326 ignition[954]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Apr 16 03:13:12.788902 ignition[954]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 03:13:12.814743 ignition[954]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 03:13:12.836029 ignition[954]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 16 03:13:12.836029 ignition[954]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 03:13:12.836029 ignition[954]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 03:13:12.836029 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 03:13:12.836029 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 03:13:12.836029 ignition[954]: INFO : files: files passed
Apr 16 03:13:12.836029 ignition[954]: INFO : Ignition finished successfully
Apr 16 03:13:12.896042 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 03:13:12.937206 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 03:13:12.945584 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 03:13:12.972729 initrd-setup-root-after-ignition[980]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 16 03:13:12.989482 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 03:13:12.989642 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 03:13:13.011116 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 03:13:13.011116 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 03:13:13.019608 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 03:13:13.026853 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 03:13:13.030365 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 03:13:13.077800 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 03:13:13.171079 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 03:13:13.171223 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 03:13:13.183508 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 03:13:13.184328 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 03:13:13.188207 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 03:13:13.189857 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 03:13:13.239387 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 03:13:13.260452 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 03:13:13.284940 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 03:13:13.292251 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 03:13:13.302014 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 03:13:13.309154 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 03:13:13.309418 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 03:13:13.322225 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 03:13:13.326618 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 03:13:13.333305 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 03:13:13.336482 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 03:13:13.343518 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 03:13:13.347462 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 03:13:13.363579 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 03:13:13.371147 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 03:13:13.390811 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 03:13:13.401472 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 03:13:13.403996 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 03:13:13.405532 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 03:13:13.414057 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 03:13:13.423476 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 03:13:13.427034 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 03:13:13.430318 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 03:13:13.440514 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 03:13:13.440768 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 03:13:13.451117 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 03:13:13.451346 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 03:13:13.463773 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 03:13:13.467081 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 03:13:13.473034 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 03:13:13.474522 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 03:13:13.479972 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 03:13:13.483596 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 03:13:13.483778 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 03:13:13.485513 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 03:13:13.485627 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 03:13:13.507029 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 03:13:13.507316 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 03:13:13.520492 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 03:13:13.520795 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 03:13:13.543174 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 03:13:13.568034 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 03:13:13.570019 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 03:13:13.570198 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 03:13:13.592123 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 03:13:13.592258 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 03:13:13.610039 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 03:13:13.618156 ignition[1009]: INFO : Ignition 2.19.0
Apr 16 03:13:13.618156 ignition[1009]: INFO : Stage: umount
Apr 16 03:13:13.618156 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 03:13:13.618156 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 03:13:13.618156 ignition[1009]: INFO : umount: umount passed
Apr 16 03:13:13.618156 ignition[1009]: INFO : Ignition finished successfully
Apr 16 03:13:13.631731 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 03:13:13.631889 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 03:13:13.633882 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 03:13:13.634025 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 03:13:13.641583 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 03:13:13.641930 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 03:13:13.650325 systemd[1]: Stopped target network.target - Network.
Apr 16 03:13:13.652500 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 03:13:13.652590 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 03:13:13.658767 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 03:13:13.658867 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 03:13:13.671940 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 03:13:13.672047 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 03:13:13.679141 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 03:13:13.679236 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 03:13:13.686924 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 03:13:13.689027 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 03:13:13.694189 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 03:13:13.696415 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 03:13:13.708567 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 03:13:13.708781 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 03:13:13.709747 systemd-networkd[782]: eth0: DHCPv6 lease lost
Apr 16 03:13:13.711884 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 03:13:13.711951 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 03:13:13.718588 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 03:13:13.718755 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 03:13:13.725023 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 03:13:13.725078 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 03:13:13.746435 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 03:13:13.751539 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 03:13:13.751648 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 03:13:13.754432 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 03:13:13.754501 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 03:13:13.754572 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 03:13:13.754610 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 03:13:13.754875 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 03:13:13.787574 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 03:13:13.789757 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 03:13:13.795669 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 03:13:13.795783 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 03:13:13.798300 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 03:13:13.798356 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 03:13:13.803056 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 03:13:13.803204 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 03:13:13.813951 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 03:13:13.814042 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 03:13:13.817224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 03:13:13.817304 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 03:13:13.846149 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 03:13:13.850567 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 03:13:13.850673 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 03:13:13.855586 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 16 03:13:13.855658 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 03:13:13.863412 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 03:13:13.863484 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 03:13:13.870457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 03:13:13.870536 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 03:13:13.874283 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 03:13:13.874401 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 03:13:13.877724 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 03:13:13.877833 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 03:13:13.884416 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 03:13:13.910570 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 03:13:13.927520 systemd[1]: Switching root.
Apr 16 03:13:13.960641 systemd-journald[195]: Journal stopped
Apr 16 03:13:15.844174 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Apr 16 03:13:15.844257 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 03:13:15.844278 kernel: SELinux: policy capability open_perms=1
Apr 16 03:13:15.844291 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 03:13:15.844309 kernel: SELinux: policy capability always_check_network=0
Apr 16 03:13:15.844327 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 03:13:15.844346 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 03:13:15.844359 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 03:13:15.844374 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 03:13:15.844390 kernel: audit: type=1403 audit(1776309194.328:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 03:13:15.844405 systemd[1]: Successfully loaded SELinux policy in 63.224ms.
Apr 16 03:13:15.844427 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.940ms.
Apr 16 03:13:15.844442 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 03:13:15.844458 systemd[1]: Detected virtualization kvm.
Apr 16 03:13:15.844472 systemd[1]: Detected architecture x86-64.
Apr 16 03:13:15.844487 systemd[1]: Detected first boot.
Apr 16 03:13:15.844501 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 03:13:15.844516 zram_generator::config[1071]: No configuration found.
Apr 16 03:13:15.844534 systemd[1]: Populated /etc with preset unit settings.
Apr 16 03:13:15.844548 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 03:13:15.844562 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 16 03:13:15.844578 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 03:13:15.844593 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 03:13:15.844607 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 03:13:15.844621 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 03:13:15.844636 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 03:13:15.844653 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 03:13:15.844667 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 03:13:15.844711 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 03:13:15.844729 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 03:13:15.844743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 03:13:15.844757 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 03:13:15.844771 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 03:13:15.844786 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 03:13:15.844803 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 03:13:15.844817 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 03:13:15.844832 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 03:13:15.844863 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 03:13:15.844878 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 03:13:15.844893 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 03:13:15.844907 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 03:13:15.844921 systemd[1]: Reached target swap.target - Swaps.
Apr 16 03:13:15.844935 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 03:13:15.844952 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 03:13:15.844966 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 03:13:15.844981 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 16 03:13:15.844996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 03:13:15.845011 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 03:13:15.845029 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 03:13:15.845043 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 03:13:15.845061 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 03:13:15.845075 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 03:13:15.845092 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 03:13:15.845107 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 03:13:15.845122 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 03:13:15.845136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 03:13:15.845151 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 03:13:15.845165 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 03:13:15.845179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 03:13:15.845194 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 03:13:15.845210 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 03:13:15.845224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 03:13:15.845239 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 03:13:15.845253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 03:13:15.845267 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 03:13:15.845281 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 03:13:15.845301 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 03:13:15.845315 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 16 03:13:15.845330 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 16 03:13:15.845346 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 03:13:15.845361 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 03:13:15.845376 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 03:13:15.845391 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 03:13:15.845428 systemd-journald[1156]: Collecting audit messages is disabled.
Apr 16 03:13:15.845455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 03:13:15.845471 systemd-journald[1156]: Journal started
Apr 16 03:13:15.845503 systemd-journald[1156]: Runtime Journal (/run/log/journal/d70ef9660dc449dc86c013b7f9cf2b23) is 6.0M, max 48.4M, 42.3M free.
Apr 16 03:13:15.854799 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 03:13:15.865730 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 03:13:15.872545 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 03:13:15.879152 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 03:13:15.889640 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 03:13:15.895279 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 03:13:15.907492 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 03:13:15.917112 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 03:13:15.921562 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 03:13:15.927652 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 03:13:15.930634 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 03:13:15.930918 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 03:13:15.933925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 03:13:15.934097 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 03:13:15.939916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 03:13:15.940105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 03:13:15.946058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 03:13:15.948796 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 03:13:15.954827 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 03:13:15.960666 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 03:13:15.978667 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 03:13:15.998001 kernel: fuse: init (API version 7.39)
Apr 16 03:13:15.998546 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 03:13:16.001379 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 03:13:16.009096 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 03:13:16.013891 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 03:13:16.024036 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 03:13:16.032800 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 03:13:16.043954 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 03:13:16.050383 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 03:13:16.066905 kernel: loop: module loaded
Apr 16 03:13:16.069940 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 16 03:13:16.073562 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 03:13:16.089607 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 03:13:16.089885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 03:13:16.097323 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 03:13:16.109510 systemd-journald[1156]: Time spent on flushing to /var/log/journal/d70ef9660dc449dc86c013b7f9cf2b23 is 52.760ms for 939 entries.
Apr 16 03:13:16.109510 systemd-journald[1156]: System Journal (/var/log/journal/d70ef9660dc449dc86c013b7f9cf2b23) is 8.0M, max 195.6M, 187.6M free.
Apr 16 03:13:16.249179 systemd-journald[1156]: Received client request to flush runtime journal.
Apr 16 03:13:16.249248 kernel: ACPI: bus type drm_connector registered
Apr 16 03:13:16.112197 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 03:13:16.112399 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 03:13:16.126068 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 03:13:16.139350 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 03:13:16.154990 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 03:13:16.159630 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 03:13:16.160513 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 03:13:16.164565 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 03:13:16.166641 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Apr 16 03:13:16.166656 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Apr 16 03:13:16.173213 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 03:13:16.178869 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 03:13:16.185932 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 16 03:13:16.193653 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 03:13:16.251335 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 03:13:16.276756 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 03:13:16.293954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 03:13:16.320929 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
Apr 16 03:13:16.321551 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
Apr 16 03:13:16.330771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 03:13:17.331543 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 03:13:17.353088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 03:13:17.398395 systemd-udevd[1236]: Using default interface naming scheme 'v255'.
Apr 16 03:13:17.450396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 03:13:17.472094 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 03:13:17.485078 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 16 03:13:17.496721 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1251)
Apr 16 03:13:17.513900 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 03:13:17.854767 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 16 03:13:17.863265 kernel: ACPI: button: Power Button [PWRF]
Apr 16 03:13:17.890471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 03:13:18.009813 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 16 03:13:18.144424 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 03:13:18.267574 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 03:13:18.338938 systemd-networkd[1244]: lo: Link UP
Apr 16 03:13:18.339384 systemd-networkd[1244]: lo: Gained carrier
Apr 16 03:13:18.341375 systemd-networkd[1244]: Enumeration completed
Apr 16 03:13:18.341738 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 03:13:18.345822 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 03:13:18.346306 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 03:13:18.347489 systemd-networkd[1244]: eth0: Link UP
Apr 16 03:13:18.347617 systemd-networkd[1244]: eth0: Gained carrier
Apr 16 03:13:18.347759 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 03:13:18.448041 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 16 03:13:18.448837 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 16 03:13:18.449053 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 16 03:13:18.448032 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 03:13:18.453753 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 03:13:18.467882 systemd-networkd[1244]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 03:13:18.683428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 03:13:18.848173 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 16 03:13:18.858993 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 16 03:13:18.880931 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 03:13:18.927092 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 16 03:13:18.932369 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 03:13:18.946153 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 16 03:13:18.954083 lvm[1284]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 03:13:18.995083 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 16 03:13:19.002520 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 03:13:19.008153 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 03:13:19.009081 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 03:13:19.014116 systemd[1]: Reached target machines.target - Containers.
Apr 16 03:13:19.020309 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 16 03:13:19.055005 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 03:13:19.125808 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 03:13:19.129161 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 03:13:19.132910 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 03:13:19.154175 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 16 03:13:19.184219 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 03:13:19.195846 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 03:13:19.216389 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 03:13:19.266535 kernel: loop0: detected capacity change from 0 to 140768
Apr 16 03:13:19.272674 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 03:13:19.275143 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 16 03:13:19.371829 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 03:13:19.419949 kernel: loop1: detected capacity change from 0 to 228704
Apr 16 03:13:19.511666 kernel: loop2: detected capacity change from 0 to 142488
Apr 16 03:13:19.726531 kernel: loop3: detected capacity change from 0 to 140768
Apr 16 03:13:19.799096 kernel: loop4: detected capacity change from 0 to 228704
Apr 16 03:13:19.845725 kernel: loop5: detected capacity change from 0 to 142488
Apr 16 03:13:19.912183 (sd-merge)[1305]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 16 03:13:19.912856 (sd-merge)[1305]: Merged extensions into '/usr'.
Apr 16 03:13:19.930810 systemd[1]: Reloading requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 03:13:19.930830 systemd[1]: Reloading...
Apr 16 03:13:20.058495 zram_generator::config[1331]: No configuration found.
Apr 16 03:13:20.342177 systemd-networkd[1244]: eth0: Gained IPv6LL
Apr 16 03:13:20.526909 ldconfig[1288]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 03:13:20.623516 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 03:13:20.756081 systemd[1]: Reloading finished in 824 ms.
Apr 16 03:13:20.787223 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 03:13:20.804128 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 03:13:20.809073 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 03:13:20.847286 systemd[1]: Starting ensure-sysext.service...
Apr 16 03:13:20.856375 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 03:13:20.867924 systemd[1]: Reloading requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)...
Apr 16 03:13:20.868302 systemd[1]: Reloading...
Apr 16 03:13:20.917649 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 03:13:20.920620 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 03:13:20.921582 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 03:13:20.921952 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Apr 16 03:13:20.922016 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Apr 16 03:13:20.936968 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 03:13:20.937194 systemd-tmpfiles[1380]: Skipping /boot
Apr 16 03:13:21.218349 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 03:13:21.218363 systemd-tmpfiles[1380]: Skipping /boot
Apr 16 03:13:21.229749 zram_generator::config[1408]: No configuration found.
Apr 16 03:13:21.586296 kernel: hrtimer: interrupt took 55441360 ns
Apr 16 03:13:21.647355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 03:13:21.764159 systemd[1]: Reloading finished in 893 ms.
Apr 16 03:13:21.836059 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 03:13:21.879034 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 03:13:21.892194 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 03:13:21.905198 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 03:13:21.929622 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 03:13:21.939864 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 03:13:21.978257 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 03:13:21.996675 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 03:13:22.002332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 03:13:22.042866 augenrules[1476]: No rules
Apr 16 03:13:22.043959 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 03:13:22.072676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 03:13:22.086076 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 03:13:22.097252 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 03:13:22.097488 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 03:13:22.100788 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 03:13:22.109137 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 03:13:22.114555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 03:13:22.114978 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 03:13:22.122615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 03:13:22.124288 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 03:13:22.129557 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 03:13:22.129849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 03:13:22.150624 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 03:13:22.151587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 03:13:22.172140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 03:13:22.179567 systemd-resolved[1463]: Positive Trust Anchors:
Apr 16 03:13:22.179641 systemd-resolved[1463]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 03:13:22.179679 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 03:13:22.180401 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 03:13:22.187987 systemd-resolved[1463]: Defaulting to hostname 'linux'.
Apr 16 03:13:22.190357 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 03:13:22.207092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 03:13:22.210104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 03:13:22.213428 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 03:13:22.220744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 03:13:22.222290 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 03:13:22.224177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 03:13:22.238233 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 03:13:22.242932 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 03:13:22.252229 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 03:13:22.252907 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 03:13:22.256381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 03:13:22.256666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 03:13:22.263526 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 03:13:22.263792 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 03:13:22.280010 systemd[1]: Reached target network.target - Network.
Apr 16 03:13:22.281965 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 03:13:22.285950 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 03:13:22.290454 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 03:13:22.290535 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 03:13:22.290569 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 03:13:22.291226 systemd[1]: Finished ensure-sysext.service.
Apr 16 03:13:22.314288 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 16 03:13:22.377845 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 03:13:22.492954 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 16 03:13:22.496198 systemd-timesyncd[1513]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 16 03:13:22.496269 systemd-timesyncd[1513]: Initial clock synchronization to Thu 2026-04-16 03:13:22.813858 UTC.
Apr 16 03:13:22.503481 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 03:13:22.518393 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 03:13:22.526000 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 03:13:22.533788 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 03:13:22.541729 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 03:13:22.542192 systemd[1]: Reached target paths.target - Path Units.
Apr 16 03:13:22.544559 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 03:13:22.548240 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 03:13:22.553118 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 03:13:22.556121 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 03:13:22.564828 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 03:13:22.626679 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 03:13:22.635306 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 03:13:22.648629 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 03:13:22.651907 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 03:13:22.657137 systemd[1]: Reached target basic.target - Basic System.
Apr 16 03:13:22.664787 systemd[1]: System is tainted: cgroupsv1
Apr 16 03:13:22.664944 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 03:13:22.664982 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 03:13:22.668387 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 03:13:22.683586 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 16 03:13:22.691188 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 03:13:22.700150 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 03:13:22.719842 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 03:13:22.731286 jq[1523]: false
Apr 16 03:13:22.736433 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 03:13:22.753747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:13:22.764142 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 03:13:22.776425 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 03:13:22.790857 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 03:13:22.794011 extend-filesystems[1524]: Found loop3
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found loop4
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found loop5
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found sr0
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found vda
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found vda1
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found vda2
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found vda3
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found usr
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found vda4
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found vda6
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found vda7
Apr 16 03:13:22.796762 extend-filesystems[1524]: Found vda9
Apr 16 03:13:22.830464 extend-filesystems[1524]: Checking size of /dev/vda9
Apr 16 03:13:22.820592 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 03:13:22.802226 dbus-daemon[1522]: [system] SELinux support is enabled
Apr 16 03:13:22.852337 extend-filesystems[1524]: Resized partition /dev/vda9
Apr 16 03:13:22.853210 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 03:13:22.860910 extend-filesystems[1551]: resize2fs 1.47.1 (20-May-2024)
Apr 16 03:13:22.875443 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 16 03:13:22.880295 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 03:13:22.891442 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 03:13:22.900137 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 03:13:22.911424 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 03:13:22.918892 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 16 03:13:22.918564 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 03:13:22.928526 jq[1564]: true
Apr 16 03:13:22.942910 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1557)
Apr 16 03:13:22.933513 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 03:13:22.933837 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 03:13:22.945588 extend-filesystems[1551]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 16 03:13:22.945588 extend-filesystems[1551]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 16 03:13:22.945588 extend-filesystems[1551]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 16 03:13:22.943589 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 03:13:22.973316 extend-filesystems[1524]: Resized filesystem in /dev/vda9
Apr 16 03:13:22.943914 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 03:13:22.946525 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 03:13:22.946838 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 03:13:22.962303 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 16 03:13:22.979630 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 03:13:22.980019 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 03:13:22.992832 update_engine[1560]: I20260416 03:13:22.989617 1560 main.cc:92] Flatcar Update Engine starting
Apr 16 03:13:23.003875 update_engine[1560]: I20260416 03:13:23.002676 1560 update_check_scheduler.cc:74] Next update check in 4m19s
Apr 16 03:13:23.026007 jq[1575]: true
Apr 16 03:13:23.050185 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 03:13:23.050247 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 16 03:13:23.050609 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 16 03:13:23.088473 tar[1573]: linux-amd64/LICENSE
Apr 16 03:13:23.090954 tar[1573]: linux-amd64/helm
Apr 16 03:13:23.088990 systemd-logind[1554]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 16 03:13:23.089011 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 16 03:13:23.094635 systemd-logind[1554]: New seat seat0.
Apr 16 03:13:23.103427 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 03:13:23.128184 bash[1609]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 03:13:23.122970 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 03:13:23.139769 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 16 03:13:23.140078 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 16 03:13:23.140184 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 03:13:23.142614 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 03:13:23.148255 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 03:13:23.148280 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 03:13:23.152360 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 03:13:23.167404 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 03:13:23.177318 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 03:13:23.442528 locksmithd[1615]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 03:13:24.030430 sshd_keygen[1572]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 03:13:24.159278 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 03:13:24.192258 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 03:13:24.298689 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 03:13:24.300379 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 03:13:24.322696 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 03:13:24.503595 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 03:13:24.591586 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 03:13:24.655863 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 03:13:24.687917 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 03:13:24.954102 containerd[1577]: time="2026-04-16T03:13:24.953901658Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 16 03:13:25.261842 containerd[1577]: time="2026-04-16T03:13:25.249266391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 16 03:13:25.313125 containerd[1577]: time="2026-04-16T03:13:25.307452092Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 16 03:13:25.313125 containerd[1577]: time="2026-04-16T03:13:25.311155797Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 16 03:13:25.314419 containerd[1577]: time="2026-04-16T03:13:25.314034915Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 16 03:13:25.315174 containerd[1577]: time="2026-04-16T03:13:25.315030770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 16 03:13:25.315265 containerd[1577]: time="2026-04-16T03:13:25.315253140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 16 03:13:25.315528 containerd[1577]: time="2026-04-16T03:13:25.315502447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 03:13:25.315597 containerd[1577]: time="2026-04-16T03:13:25.315584037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 16 03:13:25.316566 containerd[1577]: time="2026-04-16T03:13:25.316532349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 03:13:25.316647 containerd[1577]: time="2026-04-16T03:13:25.316633935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 16 03:13:25.316706 containerd[1577]: time="2026-04-16T03:13:25.316692600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 03:13:25.316795 containerd[1577]: time="2026-04-16T03:13:25.316783308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 16 03:13:25.316948 containerd[1577]: time="2026-04-16T03:13:25.316936264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 16 03:13:25.327503 containerd[1577]: time="2026-04-16T03:13:25.326960295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 16 03:13:25.334907 containerd[1577]: time="2026-04-16T03:13:25.334695096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 03:13:25.336124 containerd[1577]: time="2026-04-16T03:13:25.335324444Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 16 03:13:25.336124 containerd[1577]: time="2026-04-16T03:13:25.335677259Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 16 03:13:25.336124 containerd[1577]: time="2026-04-16T03:13:25.335839975Z" level=info msg="metadata content store policy set" policy=shared Apr 16 03:13:25.378128 containerd[1577]: time="2026-04-16T03:13:25.377572796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 16 03:13:25.378696 containerd[1577]: time="2026-04-16T03:13:25.378624610Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 16 03:13:25.378696 containerd[1577]: time="2026-04-16T03:13:25.378692399Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 16 03:13:25.378784 containerd[1577]: time="2026-04-16T03:13:25.378755358Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 16 03:13:25.379775 containerd[1577]: time="2026-04-16T03:13:25.379075548Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 16 03:13:25.379838 containerd[1577]: time="2026-04-16T03:13:25.379812872Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.380599844Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.386438306Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.386610610Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.386635866Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.386764403Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.386784828Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.386800876Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.386836758Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.386857267Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.386967859Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.387005795Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.387025072Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.387267268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 16 03:13:25.388139 containerd[1577]: time="2026-04-16T03:13:25.387395289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 16 03:13:25.389695 containerd[1577]: time="2026-04-16T03:13:25.387417293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 16 03:13:25.389695 containerd[1577]: time="2026-04-16T03:13:25.387435547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 16 03:13:25.389695 containerd[1577]: time="2026-04-16T03:13:25.387473671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 16 03:13:25.399059 containerd[1577]: time="2026-04-16T03:13:25.390513529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 16 03:13:25.434622 containerd[1577]: time="2026-04-16T03:13:25.414083539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.457408509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.458295451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.458376649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.458404009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.458421622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1
Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.458439397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.458552857Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.458591285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.458803218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.458829331Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.459148892Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.459246711Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 16 03:13:25.460684 containerd[1577]: time="2026-04-16T03:13:25.459264184Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 16 03:13:25.464001 containerd[1577]: time="2026-04-16T03:13:25.459280365Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 16 03:13:25.464001 containerd[1577]: time="2026-04-16T03:13:25.459294105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 16 03:13:25.464001 containerd[1577]: time="2026-04-16T03:13:25.459624271Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 16 03:13:25.464001 containerd[1577]: time="2026-04-16T03:13:25.459808650Z" level=info msg="NRI interface is disabled by configuration."
Apr 16 03:13:25.464001 containerd[1577]: time="2026-04-16T03:13:25.460034948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 16 03:13:25.464150 containerd[1577]: time="2026-04-16T03:13:25.463174420Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 16 03:13:25.464150 containerd[1577]: time="2026-04-16T03:13:25.463430586Z" level=info msg="Connect containerd service"
Apr 16 03:13:25.464150 containerd[1577]: time="2026-04-16T03:13:25.463489122Z" level=info msg="using legacy CRI server"
Apr 16 03:13:25.464150 containerd[1577]: time="2026-04-16T03:13:25.463498234Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 16 03:13:25.464150 containerd[1577]: time="2026-04-16T03:13:25.463909215Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 16 03:13:25.470337 containerd[1577]: time="2026-04-16T03:13:25.470075309Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 03:13:25.470776 containerd[1577]: time="2026-04-16T03:13:25.470735866Z" level=info msg="Start subscribing containerd event"
Apr 16 03:13:25.471930 containerd[1577]: time="2026-04-16T03:13:25.471146266Z" level=info msg="Start recovering state"
Apr 16 03:13:25.471930 containerd[1577]: time="2026-04-16T03:13:25.471365628Z" level=info msg="Start event monitor"
Apr 16 03:13:25.471930 containerd[1577]: time="2026-04-16T03:13:25.471383339Z" level=info msg="Start snapshots syncer"
Apr 16 03:13:25.471930 containerd[1577]: time="2026-04-16T03:13:25.471396520Z" level=info msg="Start cni network conf syncer for default"
Apr 16 03:13:25.471930 containerd[1577]: time="2026-04-16T03:13:25.471407590Z" level=info msg="Start streaming server"
Apr 16 03:13:25.478176 containerd[1577]: time="2026-04-16T03:13:25.477960865Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 16 03:13:25.479115 containerd[1577]: time="2026-04-16T03:13:25.479049267Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 16 03:13:25.481993 systemd[1]: Started containerd.service - containerd container runtime.
Apr 16 03:13:25.489234 containerd[1577]: time="2026-04-16T03:13:25.488915967Z" level=info msg="containerd successfully booted in 0.536878s"
Apr 16 03:13:26.561793 tar[1573]: linux-amd64/README.md
Apr 16 03:13:26.737762 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 16 03:13:29.124411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:13:29.168193 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 16 03:13:29.168314 (kubelet)[1663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:13:29.177095 systemd[1]: Startup finished in 13.205s (kernel) + 14.904s (userspace) = 28.109s.
Apr 16 03:13:29.366802 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
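The `failed to load cni during init` error above means containerd's CRI plugin found no network config under `/etc/cni/net.d` (the `NetworkPluginConfDir` shown in its config dump). A minimal sketch of a `.conflist` that would satisfy the loader follows; the network name, bridge name, and subnet are illustrative assumptions, not values from this host, and the sketch writes to a local `./net.d` directory so it runs unprivileged (a real node would use `/etc/cni/net.d` and needs the matching CNI plugin binaries under `/opt/cni/bin`):

```shell
# Hypothetical bridge network config in the standard CNI conflist format.
# All names and the subnet below are placeholders for illustration.
mkdir -p net.d
cat > net.d/10-bridge.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "10.88.0.0/16"}]]
      }
    }
  ]
}
EOF
```

On a kubeadm-style cluster this file is normally installed by the chosen network add-on rather than written by hand, which is why containerd tolerates the error at boot and retries via its "cni network conf syncer".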
Apr 16 03:13:29.756464 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:46108.service - OpenSSH per-connection server daemon (10.0.0.1:46108).
Apr 16 03:13:30.359748 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 46108 ssh2: RSA SHA256:WKYbEvvfayyhH9eGsmnIye8AXtS8l5sPaY8Y29cYeKg
Apr 16 03:13:30.369355 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:13:30.706286 systemd-logind[1554]: New session 1 of user core.
Apr 16 03:13:30.734418 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 16 03:13:30.773595 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 16 03:13:31.060327 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 16 03:13:31.136805 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 16 03:13:31.247253 (systemd)[1679]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 16 03:13:31.985352 systemd[1679]: Queued start job for default target default.target.
Apr 16 03:13:31.988089 systemd[1679]: Created slice app.slice - User Application Slice.
Apr 16 03:13:31.988128 systemd[1679]: Reached target paths.target - Paths.
Apr 16 03:13:31.990399 systemd[1679]: Reached target timers.target - Timers.
Apr 16 03:13:32.007514 systemd[1679]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 16 03:13:32.042877 systemd[1679]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 16 03:13:32.042936 systemd[1679]: Reached target sockets.target - Sockets.
Apr 16 03:13:32.042947 systemd[1679]: Reached target basic.target - Basic System.
Apr 16 03:13:32.042985 systemd[1679]: Reached target default.target - Main User Target.
Apr 16 03:13:32.043005 systemd[1679]: Startup finished in 747ms.
Apr 16 03:13:32.048744 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 16 03:13:32.097773 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 16 03:13:32.419664 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:46112.service - OpenSSH per-connection server daemon (10.0.0.1:46112).
Apr 16 03:13:32.668261 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 46112 ssh2: RSA SHA256:WKYbEvvfayyhH9eGsmnIye8AXtS8l5sPaY8Y29cYeKg
Apr 16 03:13:32.677913 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:13:32.802828 systemd-logind[1554]: New session 2 of user core.
Apr 16 03:13:32.835596 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 16 03:13:33.162859 sshd[1692]: pam_unix(sshd:session): session closed for user core
Apr 16 03:13:33.344103 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:46124.service - OpenSSH per-connection server daemon (10.0.0.1:46124).
Apr 16 03:13:33.361362 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:46112.service: Deactivated successfully.
Apr 16 03:13:33.366144 systemd[1]: session-2.scope: Deactivated successfully.
Apr 16 03:13:33.367126 systemd-logind[1554]: Session 2 logged out. Waiting for processes to exit.
Apr 16 03:13:33.384012 systemd-logind[1554]: Removed session 2.
Apr 16 03:13:33.435797 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 46124 ssh2: RSA SHA256:WKYbEvvfayyhH9eGsmnIye8AXtS8l5sPaY8Y29cYeKg
Apr 16 03:13:33.440313 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:13:33.462794 systemd-logind[1554]: New session 3 of user core.
Apr 16 03:13:33.482967 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 16 03:13:33.605736 sshd[1697]: pam_unix(sshd:session): session closed for user core
Apr 16 03:13:33.636048 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:46124.service: Deactivated successfully.
Apr 16 03:13:33.645096 systemd-logind[1554]: Session 3 logged out. Waiting for processes to exit.
Apr 16 03:13:33.646047 systemd[1]: session-3.scope: Deactivated successfully.
Apr 16 03:13:33.656687 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:46126.service - OpenSSH per-connection server daemon (10.0.0.1:46126).
Apr 16 03:13:33.669061 systemd-logind[1554]: Removed session 3.
Apr 16 03:13:33.768669 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 46126 ssh2: RSA SHA256:WKYbEvvfayyhH9eGsmnIye8AXtS8l5sPaY8Y29cYeKg
Apr 16 03:13:33.777558 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:13:33.846202 systemd-logind[1554]: New session 4 of user core.
Apr 16 03:13:34.044915 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 16 03:13:34.299470 sshd[1710]: pam_unix(sshd:session): session closed for user core
Apr 16 03:13:34.387768 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:46126.service: Deactivated successfully.
Apr 16 03:13:34.413829 systemd[1]: session-4.scope: Deactivated successfully.
Apr 16 03:13:34.455251 systemd-logind[1554]: Session 4 logged out. Waiting for processes to exit.
Apr 16 03:13:34.502754 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:46140.service - OpenSSH per-connection server daemon (10.0.0.1:46140).
Apr 16 03:13:34.536501 systemd-logind[1554]: Removed session 4.
Apr 16 03:13:35.002929 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 46140 ssh2: RSA SHA256:WKYbEvvfayyhH9eGsmnIye8AXtS8l5sPaY8Y29cYeKg
Apr 16 03:13:35.052073 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:13:35.170936 kubelet[1663]: E0416 03:13:35.169795 1663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:13:35.284417 systemd-logind[1554]: New session 5 of user core.
Apr 16 03:13:35.296051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:13:35.296571 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:13:35.401206 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 16 03:13:35.694162 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 16 03:13:35.700373 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 03:13:35.752574 sudo[1724]: pam_unix(sudo:session): session closed for user root
Apr 16 03:13:35.798406 sshd[1718]: pam_unix(sshd:session): session closed for user core
Apr 16 03:13:35.926597 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:45988.service - OpenSSH per-connection server daemon (10.0.0.1:45988).
Apr 16 03:13:35.941516 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:46140.service: Deactivated successfully.
Apr 16 03:13:35.963601 systemd[1]: session-5.scope: Deactivated successfully.
Apr 16 03:13:36.008474 systemd-logind[1554]: Session 5 logged out. Waiting for processes to exit.
Apr 16 03:13:36.064604 systemd-logind[1554]: Removed session 5.
Apr 16 03:13:36.340983 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 45988 ssh2: RSA SHA256:WKYbEvvfayyhH9eGsmnIye8AXtS8l5sPaY8Y29cYeKg
Apr 16 03:13:36.367447 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:13:36.671818 systemd-logind[1554]: New session 6 of user core.
Apr 16 03:13:36.731439 systemd[1]: Started session-6.scope - Session 6 of User core.
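The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the expected state of a kubeadm-flavored node before `kubeadm init` or `kubeadm join` has run: those commands are what write the kubelet's config file at that path. A small sketch of the same existence check the kubelet is effectively failing, using the path taken from the log:

```shell
# Sketch: check for the kubeadm-generated kubelet config the log complains
# about. The path is copied from the log; nothing else about this host is
# assumed. On an uninitialized node the file is absent and the kubelet
# crash-loops until kubeadm creates it.
config=/var/lib/kubelet/config.yaml
if [ -f "$config" ]; then
  echo "kubelet config present at $config"
else
  echo "kubelet config missing - node not yet initialized by kubeadm"
fi
```

This is why the repeated failures later in the log are benign noise rather than a fault: systemd keeps restarting the unit until provisioning supplies the file.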
Apr 16 03:13:37.364180 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 16 03:13:37.390167 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 03:13:37.569565 sudo[1734]: pam_unix(sudo:session): session closed for user root
Apr 16 03:13:37.734279 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 16 03:13:37.743169 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 03:13:38.382206 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 16 03:13:38.497787 auditctl[1737]: No rules
Apr 16 03:13:38.504509 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 03:13:38.509675 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 16 03:13:38.578047 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 03:13:39.027502 augenrules[1756]: No rules
Apr 16 03:13:39.054784 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 03:13:39.075635 sudo[1733]: pam_unix(sudo:session): session closed for user root
Apr 16 03:13:39.128651 sshd[1727]: pam_unix(sshd:session): session closed for user core
Apr 16 03:13:39.173097 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:46018.service - OpenSSH per-connection server daemon (10.0.0.1:46018).
Apr 16 03:13:39.174817 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:45988.service: Deactivated successfully.
Apr 16 03:13:39.185042 systemd[1]: session-6.scope: Deactivated successfully.
Apr 16 03:13:39.215238 systemd-logind[1554]: Session 6 logged out. Waiting for processes to exit.
Apr 16 03:13:39.233142 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 46018 ssh2: RSA SHA256:WKYbEvvfayyhH9eGsmnIye8AXtS8l5sPaY8Y29cYeKg
Apr 16 03:13:39.236151 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:13:39.236240 systemd-logind[1554]: Removed session 6.
Apr 16 03:13:39.425014 systemd-logind[1554]: New session 7 of user core.
Apr 16 03:13:39.450773 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 16 03:13:39.612893 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 16 03:13:39.615040 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 03:13:42.914174 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 16 03:13:42.960977 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 16 03:13:45.617501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 16 03:13:45.658766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:13:46.621754 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:13:46.621923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:13:47.010010 dockerd[1788]: time="2026-04-16T03:13:47.009503143Z" level=info msg="Starting up"
Apr 16 03:13:49.120401 kubelet[1808]: E0416 03:13:49.118866 1808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:13:49.138158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:13:49.143072 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:13:49.154037 dockerd[1788]: time="2026-04-16T03:13:49.149430774Z" level=info msg="Loading containers: start."
Apr 16 03:13:50.886782 kernel: Initializing XFRM netlink socket
Apr 16 03:13:51.806273 systemd-networkd[1244]: docker0: Link UP
Apr 16 03:13:52.057283 dockerd[1788]: time="2026-04-16T03:13:52.056352151Z" level=info msg="Loading containers: done."
Apr 16 03:13:52.271602 dockerd[1788]: time="2026-04-16T03:13:52.271192639Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 16 03:13:52.300209 dockerd[1788]: time="2026-04-16T03:13:52.297272729Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 16 03:13:52.305387 dockerd[1788]: time="2026-04-16T03:13:52.301847924Z" level=info msg="Daemon has completed initialization"
Apr 16 03:13:52.373011 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3991800926-merged.mount: Deactivated successfully.
Apr 16 03:13:54.058948 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 16 03:13:54.064380 dockerd[1788]: time="2026-04-16T03:13:54.058678107Z" level=info msg="API listen on /run/docker.sock"
Apr 16 03:13:59.174918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 16 03:13:59.219143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:14:00.030235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:14:00.046918 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:14:00.941901 kubelet[1964]: E0416 03:14:00.941818 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:14:00.948393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:14:00.950627 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:14:01.327777 containerd[1577]: time="2026-04-16T03:14:01.327436570Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 16 03:14:04.453972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1060747605.mount: Deactivated successfully.
Apr 16 03:14:08.088062 update_engine[1560]: I20260416 03:14:08.087700 1560 update_attempter.cc:509] Updating boot flags...
Apr 16 03:14:08.793212 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1999)
Apr 16 03:14:08.952325 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2003)
Apr 16 03:14:11.184872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 16 03:14:11.268029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:14:12.343465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:14:12.369398 (kubelet)[2061]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:14:14.124647 kubelet[2061]: E0416 03:14:14.124584 2061 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:14:14.136296 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:14:14.140461 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:14:16.141217 containerd[1577]: time="2026-04-16T03:14:16.140880235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:14:16.147153 containerd[1577]: time="2026-04-16T03:14:16.142426938Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427"
Apr 16 03:14:16.265186 containerd[1577]: time="2026-04-16T03:14:16.264376131Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:14:17.308843 containerd[1577]: time="2026-04-16T03:14:17.308473052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:14:17.440786 containerd[1577]: time="2026-04-16T03:14:17.440260776Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 16.112709464s"
Apr 16 03:14:17.450919 containerd[1577]: time="2026-04-16T03:14:17.444491994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 16 03:14:17.558634 containerd[1577]: time="2026-04-16T03:14:17.558457812Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 16 03:14:24.268514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 16 03:14:24.814447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:14:25.704143 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:14:25.704428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:14:26.506961 kubelet[2088]: E0416 03:14:26.506899 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:14:26.512453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:14:26.513052 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:14:28.101206 containerd[1577]: time="2026-04-16T03:14:28.100739572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:14:28.104014 containerd[1577]: time="2026-04-16T03:14:28.103764125Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 16 03:14:28.131260 containerd[1577]: time="2026-04-16T03:14:28.130971756Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:14:28.192566 containerd[1577]: time="2026-04-16T03:14:28.192379904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:14:28.194467 containerd[1577]: time="2026-04-16T03:14:28.194398709Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 10.635238419s" Apr 16 03:14:28.194467 containerd[1577]: time="2026-04-16T03:14:28.194442520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 16 03:14:28.199758 containerd[1577]: time="2026-04-16T03:14:28.197534265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 16 03:14:34.241788 containerd[1577]: time="2026-04-16T03:14:34.241505898Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:14:34.242986 containerd[1577]: time="2026-04-16T03:14:34.242807507Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 16 03:14:34.281286 containerd[1577]: time="2026-04-16T03:14:34.279603640Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:14:34.465739 containerd[1577]: time="2026-04-16T03:14:34.465461097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:14:34.501467 containerd[1577]: time="2026-04-16T03:14:34.499641858Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 6.301054808s" Apr 16 03:14:34.501467 containerd[1577]: time="2026-04-16T03:14:34.501256284Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 16 03:14:34.507178 containerd[1577]: time="2026-04-16T03:14:34.507102564Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 16 03:14:36.708569 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 16 03:14:36.741123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 03:14:37.437412 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 03:14:37.439162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 03:14:37.777854 kubelet[2116]: E0416 03:14:37.776242 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 03:14:37.791241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 03:14:37.797196 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 03:14:47.939990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 16 03:14:48.057950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 03:14:49.047511 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 03:14:49.048133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 03:14:50.482745 kubelet[2139]: E0416 03:14:50.482127 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 03:14:50.554476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 03:14:50.555001 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 03:14:52.145385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3565068538.mount: Deactivated successfully. Apr 16 03:14:57.101342 containerd[1577]: time="2026-04-16T03:14:57.100851276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:14:57.110265 containerd[1577]: time="2026-04-16T03:14:57.109991904Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 16 03:14:57.140556 containerd[1577]: time="2026-04-16T03:14:57.136644949Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:14:57.444540 containerd[1577]: time="2026-04-16T03:14:57.444240970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:14:57.578970 containerd[1577]: time="2026-04-16T03:14:57.577765030Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 23.070536955s" Apr 16 03:14:57.580899 containerd[1577]: time="2026-04-16T03:14:57.579674689Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 16 03:14:57.604774 containerd[1577]: time="2026-04-16T03:14:57.604653618Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 16 03:14:59.761117 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1592325579.mount: Deactivated successfully. Apr 16 03:15:00.669585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 16 03:15:00.744409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 03:15:01.185154 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 03:15:01.186648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 03:15:01.415051 kubelet[2174]: E0416 03:15:01.414946 2174 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 03:15:01.423081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 03:15:01.423779 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 03:15:08.010638 containerd[1577]: time="2026-04-16T03:15:08.010539262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:15:08.014445 containerd[1577]: time="2026-04-16T03:15:08.010999471Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 16 03:15:08.080041 containerd[1577]: time="2026-04-16T03:15:08.077403985Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:15:08.370069 containerd[1577]: time="2026-04-16T03:15:08.368841449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:15:08.509676 containerd[1577]: time="2026-04-16T03:15:08.509435017Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 10.904529558s" Apr 16 03:15:08.509676 containerd[1577]: time="2026-04-16T03:15:08.509646090Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 16 03:15:08.532858 containerd[1577]: time="2026-04-16T03:15:08.530138542Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 16 03:15:10.428668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount795920195.mount: Deactivated successfully. 
Apr 16 03:15:10.615094 containerd[1577]: time="2026-04-16T03:15:10.614799726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:15:10.620635 containerd[1577]: time="2026-04-16T03:15:10.620084316Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070"
Apr 16 03:15:10.687590 containerd[1577]: time="2026-04-16T03:15:10.686317091Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:15:11.433383 containerd[1577]: time="2026-04-16T03:15:11.429346729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:15:11.713576 containerd[1577]: time="2026-04-16T03:15:11.710055363Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 3.172703076s"
Apr 16 03:15:11.713576 containerd[1577]: time="2026-04-16T03:15:11.711034914Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 16 03:15:11.742150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 16 03:15:11.791811 containerd[1577]: time="2026-04-16T03:15:11.791344275Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 16 03:15:11.796770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:15:13.034106 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:15:13.036877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:15:13.954139 kubelet[2240]: E0416 03:15:13.954042 2240 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:15:13.971438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:15:13.980552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:15:14.598841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount266220177.mount: Deactivated successfully.
Apr 16 03:15:24.205961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 16 03:15:24.239120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:15:25.130206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:15:25.144186 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:15:26.551042 kubelet[2272]: E0416 03:15:26.549961 2272 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:15:26.562319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:15:26.564317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:15:31.264359 containerd[1577]: time="2026-04-16T03:15:31.264177097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:15:31.267612 containerd[1577]: time="2026-04-16T03:15:31.267296295Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826"
Apr 16 03:15:31.287828 containerd[1577]: time="2026-04-16T03:15:31.287516272Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:15:31.449922 containerd[1577]: time="2026-04-16T03:15:31.449665945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:15:31.459195 containerd[1577]: time="2026-04-16T03:15:31.457560101Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 19.664294972s"
Apr 16 03:15:31.462379 containerd[1577]: time="2026-04-16T03:15:31.461294095Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 16 03:15:36.695249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 16 03:15:36.717319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:15:37.298223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:15:37.307213 (kubelet)[2369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:15:37.578984 kubelet[2369]: E0416 03:15:37.578591 2369 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:15:37.586525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:15:37.586946 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:15:47.682325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 16 03:15:47.749175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:15:48.838654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:15:48.869339 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:15:50.238424 kubelet[2390]: E0416 03:15:50.235450 2390 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:15:50.247259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:15:50.249349 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:15:53.374718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:15:53.606184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:15:54.471638 systemd[1]: Reloading requested from client PID 2411 ('systemctl') (unit session-7.scope)...
Apr 16 03:15:54.471812 systemd[1]: Reloading...
Apr 16 03:15:56.808075 zram_generator::config[2454]: No configuration found.
Apr 16 03:15:59.107480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 03:16:00.044801 systemd[1]: Reloading finished in 5570 ms.
Apr 16 03:16:00.924292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:16:00.950035 (kubelet)[2496]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 03:16:01.083353 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:16:01.087796 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 03:16:01.088460 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:16:01.166655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:16:03.636110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:16:03.657672 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 03:16:07.815772 kubelet[2520]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 03:16:07.815772 kubelet[2520]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 03:16:07.815772 kubelet[2520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 03:16:07.821133 kubelet[2520]: I0416 03:16:07.816012 2520 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 03:16:09.996882 kubelet[2520]: I0416 03:16:09.995907 2520 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 16 03:16:10.011167 kubelet[2520]: I0416 03:16:10.008032 2520 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 03:16:10.141935 kubelet[2520]: I0416 03:16:10.141114 2520 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 03:16:10.563266 kubelet[2520]: E0416 03:16:10.554672 2520 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 03:16:10.724133 kubelet[2520]: I0416 03:16:10.721262 2520 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 03:16:11.389170 kubelet[2520]: E0416 03:16:11.388969 2520 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 16 03:16:11.389170 kubelet[2520]: I0416 03:16:11.389250 2520 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 16 03:16:11.748106 kubelet[2520]: I0416 03:16:11.733102 2520 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 16 03:16:11.875388 kubelet[2520]: I0416 03:16:11.872107 2520 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 03:16:12.067560 kubelet[2520]: I0416 03:16:11.876610 2520 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 16 03:16:12.078998 kubelet[2520]: I0416 03:16:12.077251 2520 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 03:16:12.118996 kubelet[2520]: I0416 03:16:12.118157 2520 container_manager_linux.go:303] "Creating device plugin manager"
Apr 16 03:16:12.164064 kubelet[2520]: I0416 03:16:12.158053 2520 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 03:16:12.312135 kubelet[2520]: I0416 03:16:12.311658 2520 kubelet.go:480] "Attempting to sync node with API server"
Apr 16 03:16:12.313454 kubelet[2520]: I0416 03:16:12.312514 2520 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 03:16:12.314198 kubelet[2520]: I0416 03:16:12.314123 2520 kubelet.go:386] "Adding apiserver pod source"
Apr 16 03:16:12.314525 kubelet[2520]: I0416 03:16:12.314496 2520 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 03:16:12.343092 kubelet[2520]: E0416 03:16:12.341483 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 03:16:12.349619 kubelet[2520]: E0416 03:16:12.341488 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 03:16:12.380752 kubelet[2520]: I0416 03:16:12.380345 2520 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 16 03:16:12.405606 kubelet[2520]: I0416 03:16:12.405418 2520 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 03:16:12.431369 kubelet[2520]: W0416 03:16:12.430934 2520 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 03:16:12.565383 kubelet[2520]: I0416 03:16:12.561289 2520 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 16 03:16:12.575911 kubelet[2520]: I0416 03:16:12.574944 2520 server.go:1289] "Started kubelet"
Apr 16 03:16:12.601835 kubelet[2520]: I0416 03:16:12.597502 2520 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 03:16:12.616754 kubelet[2520]: I0416 03:16:12.616672 2520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 03:16:12.626061 kubelet[2520]: E0416 03:16:12.616974 2520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b7f68b0eee42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,LastTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 03:16:12.627516 kubelet[2520]: I0416 03:16:12.626854 2520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 03:16:12.628651 kubelet[2520]: I0416 03:16:12.628339 2520 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 03:16:12.654163 kubelet[2520]: I0416 03:16:12.646072 2520 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 16 03:16:12.694034 kubelet[2520]: I0416 03:16:12.666655 2520 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 03:16:12.708960 kubelet[2520]: E0416 03:16:12.666997 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:12.708960 kubelet[2520]: I0416 03:16:12.708503 2520 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 03:16:12.714052 kubelet[2520]: I0416 03:16:12.713587 2520 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 03:16:12.743641 kubelet[2520]: E0416 03:16:12.743486 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 03:16:12.752494 kubelet[2520]: I0416 03:16:12.746665 2520 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 03:16:12.795788 kubelet[2520]: E0416 03:16:12.789977 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms"
Apr 16 03:16:12.826014 kubelet[2520]: E0416 03:16:12.807993 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:12.826014 kubelet[2520]: E0416 03:16:12.808368 2520 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 03:16:12.913223 kubelet[2520]: E0416 03:16:12.911288 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:12.941406 kubelet[2520]: I0416 03:16:12.941050 2520 factory.go:223] Registration of the systemd container factory successfully
Apr 16 03:16:12.967806 kubelet[2520]: I0416 03:16:12.965663 2520 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 03:16:13.056871 kubelet[2520]: E0416 03:16:13.056296 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms"
Apr 16 03:16:13.056871 kubelet[2520]: E0416 03:16:13.056471 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:13.160926 kubelet[2520]: E0416 03:16:13.159756 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:13.257442 kubelet[2520]: E0416 03:16:13.255871 2520 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 03:16:13.275956 kubelet[2520]: E0416 03:16:13.271987 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:13.381018 kubelet[2520]: E0416 03:16:13.379480 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:13.480950 kubelet[2520]: I0416 03:16:13.480142 2520 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 03:16:13.531329 kubelet[2520]: I0416 03:16:13.481620 2520 factory.go:223] Registration of the containerd container factory successfully
Apr 16 03:16:13.531329 kubelet[2520]: E0416 03:16:13.527210 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms"
Apr 16 03:16:13.556349 kubelet[2520]: E0416 03:16:13.555760 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:13.670066 kubelet[2520]: I0416 03:16:13.669338 2520 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 03:16:13.677724 kubelet[2520]: I0416 03:16:13.677444 2520 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 03:16:13.679784 kubelet[2520]: I0416 03:16:13.679661 2520 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 03:16:13.679784 kubelet[2520]: I0416 03:16:13.679745 2520 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 16 03:16:13.679908 kubelet[2520]: E0416 03:16:13.679855 2520 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 03:16:13.681719 kubelet[2520]: E0416 03:16:13.670133 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 03:16:13.681719 kubelet[2520]: E0416 03:16:13.670549 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 03:16:13.704776 kubelet[2520]: E0416 03:16:13.704561 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:13.749964 kubelet[2520]: E0416 03:16:13.749800 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 03:16:13.790344 kubelet[2520]: E0416 03:16:13.786382 2520 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 03:16:13.815274 kubelet[2520]: E0416 03:16:13.810610 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:13.929997 kubelet[2520]: E0416 03:16:13.926582 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:14.059708 kubelet[2520]: E0416 03:16:14.056145 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:14.066522 kubelet[2520]: E0416 03:16:14.066007 2520 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 16 03:16:14.092297 kubelet[2520]: E0416 03:16:14.092177 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 03:16:14.164561 kubelet[2520]: E0416 03:16:14.163302 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:14.269374 kubelet[2520]: E0416 03:16:14.268908 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:14.348493 kubelet[2520]: E0416 03:16:14.348230 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s"
Apr 16 03:16:14.376042 kubelet[2520]: E0416 03:16:14.375350 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:14.480082 kubelet[2520]: E0416 03:16:14.478208 2520 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 16 03:16:14.480082 kubelet[2520]: E0416 03:16:14.478808 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:14.586011 kubelet[2520]: E0416 03:16:14.583578 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:14.714096 kubelet[2520]: E0416 03:16:14.712350 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:14.824739 kubelet[2520]: E0416 03:16:14.820548 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:14.941096 kubelet[2520]: E0416 03:16:14.940108 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:14.944478 kubelet[2520]: I0416 03:16:14.943898 2520 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 03:16:14.946862 kubelet[2520]: I0416 03:16:14.945672 2520 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 03:16:14.947207 kubelet[2520]: I0416 03:16:14.947160 2520 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 03:16:15.044438 kubelet[2520]: E0416 03:16:15.043332 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:15.089554 kubelet[2520]: I0416 03:16:15.081423 2520 policy_none.go:49] "None policy: Start"
Apr 16 03:16:15.134308 kubelet[2520]: I0416 03:16:15.127499 2520 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 03:16:15.154125 kubelet[2520]: I0416 03:16:15.151887 2520 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 03:16:15.156604 kubelet[2520]: E0416 03:16:15.155087 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:15.268226 kubelet[2520]: E0416 03:16:15.267615 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:15.283114 kubelet[2520]: E0416 03:16:15.282467 2520 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 16 03:16:15.354555 kubelet[2520]: E0416 03:16:15.353645 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 03:16:15.372475 kubelet[2520]: E0416 03:16:15.372158 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:16:15.393155 kubelet[2520]: E0416 03:16:15.390430 2520 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 03:16:15.399249 kubelet[2520]: I0416 03:16:15.398385 2520 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 03:16:15.404238 kubelet[2520]: I0416 03:16:15.403627 2520 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 03:16:15.429835 kubelet[2520]: I0416 03:16:15.429762 2520 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 03:16:15.526239 kubelet[2520]: E0416 03:16:15.524595 2520 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 03:16:15.526239 kubelet[2520]: E0416 03:16:15.526202 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 16 03:16:15.602013 kubelet[2520]: I0416 03:16:15.601594 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 03:16:15.655949 kubelet[2520]: E0416 03:16:15.654975 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost"
Apr 16 03:16:16.018103 kubelet[2520]: E0416 03:16:16.018020 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="3.2s"
Apr 16 03:16:16.033918 kubelet[2520]: I0416 03:16:16.033639 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 03:16:16.046754 kubelet[2520]: E0416 03:16:16.046061 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost"
Apr 16 03:16:16.133097 kubelet[2520]: E0416 03:16:16.131792 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 03:16:16.239769 kubelet[2520]: E0416 03:16:16.238973 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 03:16:16.750119 kubelet[2520]: I0416 03:16:16.744600 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 16 03:16:16.824839 kubelet[2520]: E0416 03:16:16.814488 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost"
Apr 16 03:16:16.970922 kubelet[2520]: E0416 03:16:16.970532 2520 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 03:16:17.001893 kubelet[2520]: E0416 03:16:16.983344 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 03:16:17.078861 kubelet[2520]: I0416 03:16:17.072060 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08e6e58b5c66a8e05059ea871273285b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"08e6e58b5c66a8e05059ea871273285b\") " pod="kube-system/kube-apiserver-localhost"
Apr 16 03:16:17.091083 kubelet[2520]: I0416 03:16:17.088942 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08e6e58b5c66a8e05059ea871273285b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"08e6e58b5c66a8e05059ea871273285b\") " pod="kube-system/kube-apiserver-localhost"
Apr 16 03:16:17.091083 kubelet[2520]: I0416 03:16:17.089286 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08e6e58b5c66a8e05059ea871273285b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"08e6e58b5c66a8e05059ea871273285b\") " pod="kube-system/kube-apiserver-localhost"
Apr 16 03:16:17.463144 kubelet[2520]: I0416 03:16:17.462343 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 03:16:17.464358 kubelet[2520]: I0416 03:16:17.463305 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 03:16:17.464358 kubelet[2520]: I0416 03:16:17.463333 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 16 03:16:17.464358 kubelet[2520]: I0416 03:16:17.463508 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\"
(UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:16:17.464358 kubelet[2520]: I0416 03:16:17.463531 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:16:17.601866 kubelet[2520]: E0416 03:16:17.597922 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:17.610726 kubelet[2520]: E0416 03:16:17.607465 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:17.652260 kubelet[2520]: E0416 03:16:17.644866 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:17.661506 containerd[1577]: time="2026-04-16T03:16:17.643888571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:08e6e58b5c66a8e05059ea871273285b,Namespace:kube-system,Attempt:0,}" Apr 16 03:16:17.978681 kubelet[2520]: I0416 03:16:17.978119 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:16:17.978681 kubelet[2520]: E0416 03:16:17.980526 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:17.987631 kubelet[2520]: I0416 03:16:17.987286 2520 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 16 03:16:17.993614 kubelet[2520]: E0416 03:16:17.993509 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 16 03:16:17.995071 kubelet[2520]: E0416 03:16:17.994174 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 03:16:18.015627 containerd[1577]: time="2026-04-16T03:16:18.015497739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 16 03:16:18.379054 kubelet[2520]: E0416 03:16:18.376936 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:18.392928 kubelet[2520]: E0416 03:16:18.389516 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:18.402648 containerd[1577]: time="2026-04-16T03:16:18.402242795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 16 03:16:19.349776 kubelet[2520]: E0416 03:16:19.334395 2520 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="6.4s" Apr 16 03:16:19.800417 kubelet[2520]: I0416 03:16:19.798330 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:16:19.858421 kubelet[2520]: E0416 03:16:19.853622 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 16 03:16:21.084360 kubelet[2520]: E0416 03:16:21.071788 2520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b7f68b0eee42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,LastTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 03:16:21.158223 kubelet[2520]: E0416 03:16:21.158119 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 03:16:21.894940 kubelet[2520]: E0416 03:16:21.891516 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 03:16:21.970477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331140025.mount: Deactivated successfully. Apr 16 03:16:22.073757 containerd[1577]: time="2026-04-16T03:16:22.069191875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 03:16:22.169908 containerd[1577]: time="2026-04-16T03:16:22.169335051Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 16 03:16:22.272994 containerd[1577]: time="2026-04-16T03:16:22.272440984Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 03:16:22.367627 containerd[1577]: time="2026-04-16T03:16:22.366251083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 03:16:22.440321 containerd[1577]: time="2026-04-16T03:16:22.438355079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 03:16:22.457664 containerd[1577]: time="2026-04-16T03:16:22.457449200Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 03:16:22.539553 kubelet[2520]: E0416 03:16:22.539352 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 03:16:22.672486 containerd[1577]: time="2026-04-16T03:16:22.670814910Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 03:16:23.315029 kubelet[2520]: I0416 03:16:23.313529 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:16:23.425939 kubelet[2520]: E0416 03:16:23.420114 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 16 03:16:23.900749 kubelet[2520]: E0416 03:16:23.862527 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 03:16:24.512908 containerd[1577]: time="2026-04-16T03:16:24.510112662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 03:16:25.275115 kubelet[2520]: E0416 03:16:25.274577 2520 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 03:16:25.528030 containerd[1577]: time="2026-04-16T03:16:25.527654409Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 7.116000984s" Apr 16 03:16:25.529484 kubelet[2520]: E0416 03:16:25.528986 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:16:25.671337 containerd[1577]: time="2026-04-16T03:16:25.670193608Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 7.643138409s" Apr 16 03:16:25.711800 containerd[1577]: time="2026-04-16T03:16:25.709743710Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 8.044774309s" Apr 16 03:16:25.842220 kubelet[2520]: E0416 03:16:25.840807 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="7s" Apr 16 03:16:26.550864 containerd[1577]: time="2026-04-16T03:16:26.550075408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 03:16:26.550864 containerd[1577]: time="2026-04-16T03:16:26.550149796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 03:16:26.550864 containerd[1577]: time="2026-04-16T03:16:26.550174901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 03:16:26.550864 containerd[1577]: time="2026-04-16T03:16:26.550311922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 03:16:26.880458 containerd[1577]: time="2026-04-16T03:16:26.758469073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 03:16:26.880458 containerd[1577]: time="2026-04-16T03:16:26.758937869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 03:16:26.880458 containerd[1577]: time="2026-04-16T03:16:26.758955374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 03:16:26.889469 containerd[1577]: time="2026-04-16T03:16:26.868318976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 03:16:26.910380 containerd[1577]: time="2026-04-16T03:16:26.907952569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 03:16:26.910380 containerd[1577]: time="2026-04-16T03:16:26.908034778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 03:16:26.910380 containerd[1577]: time="2026-04-16T03:16:26.908055807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 03:16:26.940138 containerd[1577]: time="2026-04-16T03:16:26.934981174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 03:16:28.379161 containerd[1577]: time="2026-04-16T03:16:28.378953391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"a791794799c2eb71bbf5dbd6aa904b5d3616a59f833e06189a9e19ba204c2054\"" Apr 16 03:16:28.380374 containerd[1577]: time="2026-04-16T03:16:28.379878479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"be895fc5e9228431b1501707a1b823edfeed79e54eb046b127ca9c5c84ab610b\"" Apr 16 03:16:28.380374 containerd[1577]: time="2026-04-16T03:16:28.379038328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:08e6e58b5c66a8e05059ea871273285b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d5cbf92edab9d03734273fb025fae3c283d6c48a79437a1901708e639f309f3\"" Apr 16 03:16:28.473006 kubelet[2520]: E0416 03:16:28.470575 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:28.473006 kubelet[2520]: E0416 03:16:28.470624 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:28.473006 kubelet[2520]: E0416 03:16:28.470789 2520 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:28.861743 containerd[1577]: time="2026-04-16T03:16:28.859453545Z" level=info msg="CreateContainer within sandbox \"be895fc5e9228431b1501707a1b823edfeed79e54eb046b127ca9c5c84ab610b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 03:16:28.870814 containerd[1577]: time="2026-04-16T03:16:28.869934599Z" level=info msg="CreateContainer within sandbox \"a791794799c2eb71bbf5dbd6aa904b5d3616a59f833e06189a9e19ba204c2054\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 03:16:28.881867 containerd[1577]: time="2026-04-16T03:16:28.881491570Z" level=info msg="CreateContainer within sandbox \"6d5cbf92edab9d03734273fb025fae3c283d6c48a79437a1901708e639f309f3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 03:16:29.270389 containerd[1577]: time="2026-04-16T03:16:29.270312866Z" level=info msg="CreateContainer within sandbox \"be895fc5e9228431b1501707a1b823edfeed79e54eb046b127ca9c5c84ab610b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5917193b1fe87594a4686bf30bfc167f60f5db94525bd9c7b2205557ecfad6cb\"" Apr 16 03:16:29.278598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3868078657.mount: Deactivated successfully. 
Apr 16 03:16:29.376761 containerd[1577]: time="2026-04-16T03:16:29.375198650Z" level=info msg="CreateContainer within sandbox \"6d5cbf92edab9d03734273fb025fae3c283d6c48a79437a1901708e639f309f3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"93e9ee17c551ec50e3d0db38fdc8c6c5f9590fd130522e18f353deb711b04c7f\"" Apr 16 03:16:29.389186 containerd[1577]: time="2026-04-16T03:16:29.388039230Z" level=info msg="CreateContainer within sandbox \"a791794799c2eb71bbf5dbd6aa904b5d3616a59f833e06189a9e19ba204c2054\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"db41aac0b1005203ebaf8aabe0a3afb429a76d42324512e8c7575cebbb77bece\"" Apr 16 03:16:29.407563 containerd[1577]: time="2026-04-16T03:16:29.407216459Z" level=info msg="StartContainer for \"5917193b1fe87594a4686bf30bfc167f60f5db94525bd9c7b2205557ecfad6cb\"" Apr 16 03:16:29.424961 containerd[1577]: time="2026-04-16T03:16:29.423289336Z" level=info msg="StartContainer for \"93e9ee17c551ec50e3d0db38fdc8c6c5f9590fd130522e18f353deb711b04c7f\"" Apr 16 03:16:29.457583 containerd[1577]: time="2026-04-16T03:16:29.454044885Z" level=info msg="StartContainer for \"db41aac0b1005203ebaf8aabe0a3afb429a76d42324512e8c7575cebbb77bece\"" Apr 16 03:16:30.400468 kubelet[2520]: I0416 03:16:30.397523 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:16:30.466673 kubelet[2520]: E0416 03:16:30.464277 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 16 03:16:30.475932 kubelet[2520]: E0416 03:16:30.470110 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Apr 16 03:16:32.317990 kubelet[2520]: E0416 03:16:32.314414 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 03:16:32.527776 containerd[1577]: time="2026-04-16T03:16:32.525305071Z" level=info msg="StartContainer for \"93e9ee17c551ec50e3d0db38fdc8c6c5f9590fd130522e18f353deb711b04c7f\" returns successfully" Apr 16 03:16:33.107767 containerd[1577]: time="2026-04-16T03:16:33.094129889Z" level=info msg="StartContainer for \"5917193b1fe87594a4686bf30bfc167f60f5db94525bd9c7b2205557ecfad6cb\" returns successfully" Apr 16 03:16:34.262862 containerd[1577]: time="2026-04-16T03:16:34.174479907Z" level=info msg="StartContainer for \"db41aac0b1005203ebaf8aabe0a3afb429a76d42324512e8c7575cebbb77bece\" returns successfully" Apr 16 03:16:34.511955 kubelet[2520]: E0416 03:16:32.518416 2520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b7f68b0eee42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,LastTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 03:16:34.551251 kubelet[2520]: E0416 03:16:34.522354 2520 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="7s" Apr 16 03:16:36.176381 kubelet[2520]: E0416 03:16:36.170493 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:16:37.857096 kubelet[2520]: E0416 03:16:37.853278 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:37.974644 kubelet[2520]: E0416 03:16:37.974544 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:38.003803 kubelet[2520]: I0416 03:16:37.998445 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:16:38.268874 kubelet[2520]: E0416 03:16:38.234554 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:38.411286 kubelet[2520]: E0416 03:16:38.408363 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:40.226052 kubelet[2520]: E0416 03:16:40.225739 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:40.288220 kubelet[2520]: E0416 03:16:40.287874 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:40.442003 kubelet[2520]: E0416 03:16:40.441769 2520 kubelet.go:3305] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:40.443411 kubelet[2520]: E0416 03:16:40.443245 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:40.747855 kubelet[2520]: E0416 03:16:40.746576 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:40.849895 kubelet[2520]: E0416 03:16:40.847793 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:42.226051 kubelet[2520]: E0416 03:16:42.224661 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:42.282499 kubelet[2520]: E0416 03:16:42.280988 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:42.412433 kubelet[2520]: E0416 03:16:42.411664 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:42.439249 kubelet[2520]: E0416 03:16:42.436620 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:42.609622 kubelet[2520]: E0416 03:16:42.593914 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:42.947604 kubelet[2520]: E0416 03:16:42.795605 
2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:44.368803 kubelet[2520]: E0416 03:16:44.361819 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:44.408534 kubelet[2520]: E0416 03:16:44.406341 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:44.777378 kubelet[2520]: E0416 03:16:44.763485 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 03:16:44.794441 kubelet[2520]: E0416 03:16:44.791199 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 03:16:46.312849 kubelet[2520]: E0416 03:16:46.310475 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:16:46.734289 kubelet[2520]: E0416 03:16:46.729387 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:46.760829 kubelet[2520]: E0416 03:16:46.760515 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:46.773146 kubelet[2520]: E0416 03:16:46.770657 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:46.863505 kubelet[2520]: E0416 03:16:46.861225 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:48.105931 kubelet[2520]: E0416 03:16:48.093866 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 03:16:51.244984 kubelet[2520]: E0416 03:16:51.243039 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:16:51.378769 kubelet[2520]: E0416 03:16:51.375669 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:16:51.577339 kubelet[2520]: E0416 03:16:51.573024 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 03:16:52.578583 kubelet[2520]: E0416 03:16:52.574316 2520 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 03:16:52.588555 kubelet[2520]: E0416 03:16:52.587710 2520 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" 
err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 03:16:54.674207 kubelet[2520]: E0416 03:16:54.652413 2520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6b7f68b0eee42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,LastTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 03:16:55.377153 kubelet[2520]: E0416 03:16:55.377029 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 03:16:55.467823 kubelet[2520]: I0416 03:16:55.467053 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:16:56.404288 kubelet[2520]: E0416 03:16:56.396128 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:17:00.210326 kubelet[2520]: E0416 03:17:00.206196 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:17:00.242130 kubelet[2520]: E0416 03:17:00.241136 2520 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:17:00.418004 kubelet[2520]: E0416 03:17:00.377928 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 03:17:05.646102 kubelet[2520]: E0416 03:17:05.645889 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 03:17:06.749282 kubelet[2520]: E0416 03:17:06.748381 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:17:08.665188 kubelet[2520]: E0416 03:17:08.660074 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 03:17:12.938611 kubelet[2520]: I0416 03:17:12.932561 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:17:14.818039 kubelet[2520]: E0416 03:17:14.781246 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 03:17:14.818039 kubelet[2520]: E0416 03:17:14.867202 2520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": net/http: TLS handshake 
timeout" event="&Event{ObjectMeta:{localhost.18a6b7f68b0eee42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,LastTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 03:17:16.957427 kubelet[2520]: E0416 03:17:16.949052 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:17:17.617213 kubelet[2520]: E0416 03:17:17.575578 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 03:17:23.001402 kubelet[2520]: E0416 03:17:22.999171 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 03:17:25.943018 kubelet[2520]: E0416 03:17:25.940492 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 03:17:27.021004 kubelet[2520]: E0416 03:17:26.978224 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" 
not found" Apr 16 03:17:30.351881 kubelet[2520]: I0416 03:17:30.350652 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:17:34.925027 kubelet[2520]: E0416 03:17:34.921835 2520 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 03:17:34.953882 kubelet[2520]: E0416 03:17:34.921586 2520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6b7f68b0eee42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,LastTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 03:17:37.073963 kubelet[2520]: E0416 03:17:37.066334 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:17:40.438486 kubelet[2520]: E0416 03:17:40.429380 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 03:17:42.074759 update_engine[1560]: I20260416 03:17:42.074051 1560 prefs.cc:52] certificate-report-to-send-update not present in 
/var/lib/update_engine/prefs Apr 16 03:17:42.135231 update_engine[1560]: I20260416 03:17:42.078017 1560 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 16 03:17:42.135231 update_engine[1560]: I20260416 03:17:42.081073 1560 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 16 03:17:42.148846 update_engine[1560]: I20260416 03:17:42.146078 1560 omaha_request_params.cc:62] Current group set to lts Apr 16 03:17:42.148846 update_engine[1560]: I20260416 03:17:42.147264 1560 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 16 03:17:42.148846 update_engine[1560]: I20260416 03:17:42.147289 1560 update_attempter.cc:643] Scheduling an action processor start. Apr 16 03:17:42.148846 update_engine[1560]: I20260416 03:17:42.147310 1560 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 03:17:42.148846 update_engine[1560]: I20260416 03:17:42.147599 1560 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 16 03:17:42.148846 update_engine[1560]: I20260416 03:17:42.147894 1560 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 03:17:42.148846 update_engine[1560]: I20260416 03:17:42.147904 1560 omaha_request_action.cc:272] Request: Apr 16 03:17:42.148846 update_engine[1560]: Apr 16 03:17:42.148846 update_engine[1560]: Apr 16 03:17:42.148846 update_engine[1560]: Apr 16 03:17:42.148846 update_engine[1560]: Apr 16 03:17:42.148846 update_engine[1560]: Apr 16 03:17:42.148846 update_engine[1560]: Apr 16 03:17:42.148846 update_engine[1560]: Apr 16 03:17:42.148846 update_engine[1560]: Apr 16 03:17:42.148846 update_engine[1560]: I20260416 03:17:42.147910 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 03:17:42.152551 locksmithd[1615]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 16 03:17:42.170197 update_engine[1560]: I20260416 
03:17:42.166627 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 03:17:42.186474 update_engine[1560]: I20260416 03:17:42.177459 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 03:17:42.197219 update_engine[1560]: E20260416 03:17:42.194160 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 03:17:42.202143 update_engine[1560]: I20260416 03:17:42.201431 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 16 03:17:42.984122 kubelet[2520]: E0416 03:17:42.981847 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 16 03:17:47.119542 kubelet[2520]: E0416 03:17:47.111573 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:17:47.975214 kubelet[2520]: I0416 03:17:47.969258 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:17:51.200177 kubelet[2520]: E0416 03:17:51.197894 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 03:17:52.082131 update_engine[1560]: I20260416 03:17:52.079991 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 03:17:52.121983 update_engine[1560]: I20260416 03:17:52.121572 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 03:17:52.126325 update_engine[1560]: I20260416 03:17:52.125954 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 03:17:52.147830 update_engine[1560]: E20260416 03:17:52.146879 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 03:17:52.155616 update_engine[1560]: I20260416 03:17:52.154537 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 16 03:17:52.562513 kubelet[2520]: E0416 03:17:52.560111 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 03:17:54.131588 kubelet[2520]: E0416 03:17:54.130983 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 03:17:55.284182 kubelet[2520]: E0416 03:17:55.281035 2520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6b7f68b0eee42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,LastTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 03:17:57.179423 kubelet[2520]: E0416 03:17:57.177415 2520 eviction_manager.go:292] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:17:58.057848 kubelet[2520]: E0416 03:17:58.055605 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 03:18:00.023613 kubelet[2520]: E0416 03:17:59.984730 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 03:18:00.107651 kubelet[2520]: E0416 03:18:00.072117 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 03:18:02.080636 update_engine[1560]: I20260416 03:18:02.078519 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 03:18:02.094433 update_engine[1560]: I20260416 03:18:02.093429 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 03:18:02.094433 update_engine[1560]: I20260416 03:18:02.094376 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 03:18:02.109137 update_engine[1560]: E20260416 03:18:02.108613 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 03:18:02.109137 update_engine[1560]: I20260416 03:18:02.109031 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 16 03:18:05.246359 kubelet[2520]: I0416 03:18:05.243352 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:18:06.801210 kubelet[2520]: E0416 03:18:06.800991 2520 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 03:18:07.347906 kubelet[2520]: E0416 03:18:07.339329 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:18:12.101026 update_engine[1560]: I20260416 03:18:12.083993 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 03:18:12.130115 update_engine[1560]: I20260416 03:18:12.124614 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 03:18:12.130428 update_engine[1560]: I20260416 03:18:12.130319 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 03:18:12.140725 update_engine[1560]: E20260416 03:18:12.140388 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 03:18:12.140725 update_engine[1560]: I20260416 03:18:12.140789 1560 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 03:18:12.140725 update_engine[1560]: I20260416 03:18:12.140806 1560 omaha_request_action.cc:617] Omaha request response: Apr 16 03:18:12.145093 update_engine[1560]: E20260416 03:18:12.141407 1560 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 16 03:18:12.145093 update_engine[1560]: I20260416 03:18:12.144294 1560 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 16 03:18:12.145093 update_engine[1560]: I20260416 03:18:12.144506 1560 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 03:18:12.145093 update_engine[1560]: I20260416 03:18:12.144517 1560 update_attempter.cc:306] Processing Done. Apr 16 03:18:12.145093 update_engine[1560]: E20260416 03:18:12.144616 1560 update_attempter.cc:619] Update failed. Apr 16 03:18:12.145093 update_engine[1560]: I20260416 03:18:12.144738 1560 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 16 03:18:12.145093 update_engine[1560]: I20260416 03:18:12.144747 1560 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 16 03:18:12.145093 update_engine[1560]: I20260416 03:18:12.144752 1560 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 16 03:18:12.145715 update_engine[1560]: I20260416 03:18:12.145653 1560 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 03:18:12.145828 update_engine[1560]: I20260416 03:18:12.145788 1560 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 03:18:12.145828 update_engine[1560]: I20260416 03:18:12.145807 1560 omaha_request_action.cc:272] Request: Apr 16 03:18:12.145828 update_engine[1560]: Apr 16 03:18:12.145828 update_engine[1560]: Apr 16 03:18:12.145828 update_engine[1560]: Apr 16 03:18:12.145828 update_engine[1560]: Apr 16 03:18:12.145828 update_engine[1560]: Apr 16 03:18:12.145828 update_engine[1560]: Apr 16 03:18:12.145828 update_engine[1560]: I20260416 03:18:12.145814 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 03:18:12.146141 update_engine[1560]: I20260416 03:18:12.146101 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 03:18:12.147933 update_engine[1560]: I20260416 03:18:12.147718 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 03:18:12.149013 locksmithd[1615]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 16 03:18:12.159035 update_engine[1560]: E20260416 03:18:12.158653 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 03:18:12.159035 update_engine[1560]: I20260416 03:18:12.159113 1560 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 03:18:12.159035 update_engine[1560]: I20260416 03:18:12.159126 1560 omaha_request_action.cc:617] Omaha request response: Apr 16 03:18:12.159035 update_engine[1560]: I20260416 03:18:12.159135 1560 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 03:18:12.159035 update_engine[1560]: I20260416 03:18:12.159142 1560 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 03:18:12.159035 update_engine[1560]: I20260416 03:18:12.159147 1560 update_attempter.cc:306] Processing Done. Apr 16 03:18:12.160287 update_engine[1560]: I20260416 03:18:12.159216 1560 update_attempter.cc:310] Error event sent. 
Apr 16 03:18:12.160287 update_engine[1560]: I20260416 03:18:12.159277 1560 update_check_scheduler.cc:74] Next update check in 44m33s Apr 16 03:18:12.161927 locksmithd[1615]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 16 03:18:13.040922 kubelet[2520]: E0416 03:18:13.038980 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:18:13.163581 kubelet[2520]: E0416 03:18:13.162819 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:18:15.366854 kubelet[2520]: E0416 03:18:15.324152 2520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6b7f68b0eee42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,LastTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 03:18:15.441838 kubelet[2520]: E0416 03:18:15.383048 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 03:18:17.219022 kubelet[2520]: E0416 03:18:17.216266 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 03:18:17.382594 kubelet[2520]: E0416 03:18:17.381943 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:18:17.641099 kubelet[2520]: E0416 03:18:17.563602 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:18:17.718568 kubelet[2520]: E0416 03:18:17.717447 2520 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:18:17.844881 kubelet[2520]: E0416 03:18:17.844712 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:18:17.902639 kubelet[2520]: E0416 03:18:17.898298 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:18:22.817081 kubelet[2520]: I0416 03:18:22.815218 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:18:27.521678 kubelet[2520]: E0416 03:18:27.514164 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:18:32.913763 kubelet[2520]: E0416 03:18:32.899294 2520 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Apr 16 03:18:33.254217 kubelet[2520]: E0416 03:18:33.172261 2520 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 03:18:34.375720 kubelet[2520]: E0416 03:18:34.372784 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 16 03:18:35.446435 kubelet[2520]: E0416 03:18:35.416739 2520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6b7f68b0eee42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,LastTimestamp:2026-04-16 03:16:12.567088706 +0000 UTC m=+8.808851333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 03:18:37.564022 kubelet[2520]: E0416 03:18:37.557511 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:18:38.841281 kubelet[2520]: E0416 03:18:38.838581 2520 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 03:18:40.450644 kubelet[2520]: I0416 03:18:40.448649 2520 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:18:47.754520 kubelet[2520]: E0416 03:18:47.752661 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:18:47.817533 kubelet[2520]: I0416 03:18:47.649811 2520 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 03:18:48.104206 kubelet[2520]: E0416 03:18:48.049150 2520 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 03:18:51.270299 kubelet[2520]: E0416 03:18:51.269539 2520 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:18:51.501819 kubelet[2520]: E0416 03:18:51.496440 2520 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Apr 16 03:18:54.012824 kubelet[2520]: E0416 03:18:54.011634 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:18:57.877361 kubelet[2520]: E0416 03:18:57.873587 2520 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:18:59.590775 kubelet[2520]: E0416 03:18:59.588432 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:19:01.748100 kubelet[2520]: E0416 03:19:01.726869 2520 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 03:19:05.152450 kubelet[2520]: 
E0416 03:19:05.151523 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:19:07.841565 kubelet[2520]: I0416 03:19:07.840547 2520 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 03:19:08.026856 kubelet[2520]: I0416 03:19:08.026605 2520 apiserver.go:52] "Watching apiserver" Apr 16 03:19:09.147238 kubelet[2520]: I0416 03:19:09.146465 2520 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 03:19:09.617515 kubelet[2520]: E0416 03:19:09.617024 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:19:09.624477 kubelet[2520]: I0416 03:19:09.601570 2520 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 03:19:10.154281 kubelet[2520]: I0416 03:19:10.153468 2520 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 03:19:10.159134 kubelet[2520]: E0416 03:19:10.155592 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:19:10.628924 kubelet[2520]: E0416 03:19:10.627293 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:19:10.635946 kubelet[2520]: E0416 03:19:10.630600 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:19:13.264076 kubelet[2520]: E0416 03:19:13.263916 2520 
kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.269s" Apr 16 03:19:15.364862 kubelet[2520]: E0416 03:19:15.245613 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.304s" Apr 16 03:19:15.796869 kubelet[2520]: E0416 03:19:15.796810 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:19:16.926930 kubelet[2520]: E0416 03:19:16.918857 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.14s" Apr 16 03:19:17.527821 kubelet[2520]: I0416 03:19:17.524595 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.524376583 podStartE2EDuration="8.524376583s" podCreationTimestamp="2026-04-16 03:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 03:19:16.353888571 +0000 UTC m=+192.595651200" watchObservedRunningTime="2026-04-16 03:19:17.524376583 +0000 UTC m=+193.766139215" Apr 16 03:19:18.655893 kubelet[2520]: I0416 03:19:18.636097 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.635678588 podStartE2EDuration="8.635678588s" podCreationTimestamp="2026-04-16 03:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 03:19:17.530540504 +0000 UTC m=+193.772303133" watchObservedRunningTime="2026-04-16 03:19:18.635678588 +0000 UTC m=+194.877441217" Apr 16 03:19:18.661512 kubelet[2520]: I0416 03:19:18.660868 2520 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.660051548 podStartE2EDuration="8.660051548s" podCreationTimestamp="2026-04-16 03:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 03:19:18.634981996 +0000 UTC m=+194.876744632" watchObservedRunningTime="2026-04-16 03:19:18.660051548 +0000 UTC m=+194.901814176" Apr 16 03:19:20.720547 kubelet[2520]: E0416 03:19:20.720395 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.014s" Apr 16 03:19:21.511894 kubelet[2520]: E0416 03:19:21.502552 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:19:25.122200 kubelet[2520]: E0416 03:19:25.117602 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.272s" Apr 16 03:19:26.960333 kubelet[2520]: E0416 03:19:26.960245 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:19:32.449628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5917193b1fe87594a4686bf30bfc167f60f5db94525bd9c7b2205557ecfad6cb-rootfs.mount: Deactivated successfully. 
Apr 16 03:19:32.854299 containerd[1577]: time="2026-04-16T03:19:32.843482560Z" level=info msg="shim disconnected" id=5917193b1fe87594a4686bf30bfc167f60f5db94525bd9c7b2205557ecfad6cb namespace=k8s.io Apr 16 03:19:32.854299 containerd[1577]: time="2026-04-16T03:19:32.844495186Z" level=warning msg="cleaning up after shim disconnected" id=5917193b1fe87594a4686bf30bfc167f60f5db94525bd9c7b2205557ecfad6cb namespace=k8s.io Apr 16 03:19:32.854299 containerd[1577]: time="2026-04-16T03:19:32.844571682Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 03:19:34.021645 kubelet[2520]: E0416 03:19:33.690556 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:19:34.683725 kubelet[2520]: E0416 03:19:34.683289 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.911s" Apr 16 03:19:36.494922 kubelet[2520]: I0416 03:19:36.490042 2520 scope.go:117] "RemoveContainer" containerID="5917193b1fe87594a4686bf30bfc167f60f5db94525bd9c7b2205557ecfad6cb" Apr 16 03:19:36.521753 kubelet[2520]: E0416 03:19:36.520596 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:19:37.377999 containerd[1577]: time="2026-04-16T03:19:37.377932645Z" level=info msg="CreateContainer within sandbox \"be895fc5e9228431b1501707a1b823edfeed79e54eb046b127ca9c5c84ab610b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 16 03:19:39.996916 kubelet[2520]: E0416 03:19:39.996528 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:19:40.352806 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2570048238.mount: Deactivated successfully. Apr 16 03:19:40.543875 kubelet[2520]: E0416 03:19:40.539642 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.561s" Apr 16 03:19:40.756267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208359156.mount: Deactivated successfully. Apr 16 03:19:41.526833 containerd[1577]: time="2026-04-16T03:19:41.458351124Z" level=info msg="CreateContainer within sandbox \"be895fc5e9228431b1501707a1b823edfeed79e54eb046b127ca9c5c84ab610b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"178d3df52882fa29fd1d924de8f4fcd7e5c5325b2f184e4eea06d869889a3dda\"" Apr 16 03:19:42.158939 kubelet[2520]: E0416 03:19:42.156058 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.614s" Apr 16 03:19:42.681348 containerd[1577]: time="2026-04-16T03:19:42.681026138Z" level=info msg="StartContainer for \"178d3df52882fa29fd1d924de8f4fcd7e5c5325b2f184e4eea06d869889a3dda\"" Apr 16 03:19:43.266485 kubelet[2520]: E0416 03:19:43.266432 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.106s" Apr 16 03:19:45.479751 kubelet[2520]: E0416 03:19:45.477134 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.622s" Apr 16 03:19:45.660268 kubelet[2520]: E0416 03:19:45.659847 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:19:46.529965 containerd[1577]: time="2026-04-16T03:19:46.497411448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 03:19:46.529965 containerd[1577]: time="2026-04-16T03:19:46.510157814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 03:19:46.529965 containerd[1577]: time="2026-04-16T03:19:46.510450655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 03:19:46.573764 containerd[1577]: time="2026-04-16T03:19:46.557870541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 03:19:48.515798 kubelet[2520]: E0416 03:19:48.515452 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.657s" Apr 16 03:19:49.192679 kubelet[2520]: E0416 03:19:49.192631 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:19:52.248571 kubelet[2520]: E0416 03:19:52.241427 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:19:52.712815 kubelet[2520]: E0416 03:19:52.704942 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.188s" Apr 16 03:19:53.852265 kubelet[2520]: E0416 03:19:53.843974 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:19:55.362757 containerd[1577]: time="2026-04-16T03:19:55.359821021Z" level=info msg="StartContainer for \"178d3df52882fa29fd1d924de8f4fcd7e5c5325b2f184e4eea06d869889a3dda\" returns successfully" Apr 16 
03:19:58.561226 kubelet[2520]: E0416 03:19:58.549484 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:19:59.016900 kubelet[2520]: E0416 03:19:59.016848 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.277s" Apr 16 03:20:01.005145 kubelet[2520]: E0416 03:20:01.004636 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:20:01.672370 kubelet[2520]: E0416 03:20:01.672232 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.513s" Apr 16 03:20:02.946346 kubelet[2520]: E0416 03:20:02.946044 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:20:02.990626 kubelet[2520]: E0416 03:20:02.966411 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.251s" Apr 16 03:20:04.290733 kubelet[2520]: E0416 03:20:04.290179 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:20:07.513936 kubelet[2520]: E0416 03:20:07.508397 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.828s" Apr 16 03:20:08.840648 systemd[1]: Reloading requested from client PID 2898 ('systemctl') (unit session-7.scope)... Apr 16 03:20:08.851916 systemd[1]: Reloading... 
Apr 16 03:20:09.150611 kubelet[2520]: E0416 03:20:09.119767 2520 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.375s" Apr 16 03:20:09.822354 kubelet[2520]: E0416 03:20:09.822216 2520 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:20:10.462717 kubelet[2520]: E0416 03:20:10.462551 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:20:10.509755 kubelet[2520]: E0416 03:20:10.463545 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:20:10.667877 zram_generator::config[2937]: No configuration found. Apr 16 03:20:11.619784 kubelet[2520]: E0416 03:20:11.591331 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:20:12.620837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 03:20:13.743500 systemd[1]: Reloading finished in 4883 ms. Apr 16 03:20:14.177203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 03:20:14.487548 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 03:20:14.531432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 03:20:14.715132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 03:20:17.435523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 03:20:17.470828 (kubelet)[2996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 03:20:21.006785 kubelet[2996]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 03:20:21.028924 kubelet[2996]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 03:20:21.028924 kubelet[2996]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 03:20:21.032148 kubelet[2996]: I0416 03:20:21.031743 2996 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 03:20:22.551662 kubelet[2996]: I0416 03:20:22.549677 2996 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 16 03:20:22.551662 kubelet[2996]: I0416 03:20:22.549910 2996 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 03:20:22.673523 kubelet[2996]: I0416 03:20:22.673437 2996 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 03:20:23.308835 kubelet[2996]: I0416 03:20:23.280666 2996 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 03:20:23.491567 kubelet[2996]: I0416 03:20:23.488496 2996 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 03:20:25.473064 kubelet[2996]: E0416 03:20:25.463418 2996 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 03:20:25.530372 kubelet[2996]: I0416 03:20:25.470279 2996 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 16 03:20:25.894459 kubelet[2996]: I0416 03:20:25.888548 2996 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 16 03:20:25.903280 kubelet[2996]: I0416 03:20:25.899437 2996 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 03:20:26.133251 kubelet[2996]: I0416 03:20:25.905783 2996 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPoli
cy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 16 03:20:26.146563 kubelet[2996]: I0416 03:20:26.145463 2996 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 03:20:26.151379 kubelet[2996]: I0416 03:20:26.150394 2996 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 03:20:26.169284 kubelet[2996]: I0416 03:20:26.168351 2996 state_mem.go:36] "Initialized new in-memory state store" Apr 16 03:20:26.191835 kubelet[2996]: I0416 03:20:26.189907 2996 kubelet.go:480] "Attempting to sync node with API server" Apr 16 03:20:26.209788 kubelet[2996]: I0416 03:20:26.207247 2996 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 03:20:26.209788 kubelet[2996]: I0416 03:20:26.209117 2996 kubelet.go:386] "Adding apiserver pod source" Apr 16 03:20:26.209788 kubelet[2996]: I0416 03:20:26.209625 2996 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 03:20:26.840925 kubelet[2996]: I0416 03:20:26.840797 2996 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 03:20:26.889762 kubelet[2996]: I0416 03:20:26.888511 2996 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 03:20:27.276645 kubelet[2996]: I0416 03:20:27.270487 2996 apiserver.go:52] "Watching apiserver" Apr 16 03:20:27.619987 kubelet[2996]: I0416 03:20:27.599915 2996 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 03:20:27.632884 kubelet[2996]: I0416 03:20:27.629374 2996 server.go:1289] "Started kubelet" Apr 16 03:20:27.826000 
kubelet[2996]: I0416 03:20:27.823239 2996 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 03:20:28.054263 kubelet[2996]: I0416 03:20:28.042774 2996 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 03:20:28.163506 kubelet[2996]: I0416 03:20:28.154324 2996 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 03:20:28.250274 kubelet[2996]: I0416 03:20:28.250183 2996 server.go:317] "Adding debug handlers to kubelet server" Apr 16 03:20:28.273016 kubelet[2996]: I0416 03:20:28.272964 2996 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 03:20:28.304070 kubelet[2996]: I0416 03:20:28.303990 2996 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 03:20:28.379976 kubelet[2996]: I0416 03:20:28.376762 2996 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 03:20:28.485849 kubelet[2996]: I0416 03:20:28.478400 2996 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 03:20:28.581248 kubelet[2996]: I0416 03:20:28.573797 2996 reconciler.go:26] "Reconciler: start to sync state" Apr 16 03:20:28.858798 kubelet[2996]: E0416 03:20:28.850881 2996 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 03:20:28.887413 kubelet[2996]: I0416 03:20:28.886976 2996 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Apr 16 03:20:28.913587 kubelet[2996]: I0416 03:20:28.913466 2996 factory.go:223] Registration of the systemd container factory successfully Apr 16 03:20:29.110993 kubelet[2996]: I0416 03:20:29.110052 2996 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 03:20:29.212951 kubelet[2996]: I0416 03:20:29.210555 2996 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 16 03:20:29.212951 kubelet[2996]: I0416 03:20:29.212302 2996 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 03:20:29.217526 kubelet[2996]: I0416 03:20:29.216761 2996 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 03:20:29.217526 kubelet[2996]: I0416 03:20:29.216908 2996 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 03:20:29.218866 kubelet[2996]: E0416 03:20:29.217314 2996 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 03:20:29.325194 kubelet[2996]: E0416 03:20:29.324104 2996 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 03:20:29.569167 kubelet[2996]: E0416 03:20:29.566379 2996 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 03:20:29.959709 kubelet[2996]: I0416 03:20:29.959525 2996 factory.go:223] Registration of the containerd container factory successfully Apr 16 03:20:29.972760 kubelet[2996]: E0416 03:20:29.972388 2996 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed 
yet" Apr 16 03:20:31.185314 kubelet[2996]: E0416 03:20:31.137454 2996 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 03:20:32.946037 kubelet[2996]: E0416 03:20:32.941527 2996 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 03:20:36.155351 kubelet[2996]: E0416 03:20:36.154004 2996 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 03:20:41.172846 kubelet[2996]: E0416 03:20:41.168260 2996 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 03:20:41.884592 kubelet[2996]: I0416 03:20:41.884525 2996 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 03:20:41.884592 kubelet[2996]: I0416 03:20:41.884554 2996 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 03:20:41.899635 kubelet[2996]: I0416 03:20:41.884708 2996 state_mem.go:36] "Initialized new in-memory state store" Apr 16 03:20:41.899635 kubelet[2996]: I0416 03:20:41.885043 2996 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 03:20:41.899635 kubelet[2996]: I0416 03:20:41.885090 2996 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 03:20:41.899635 kubelet[2996]: I0416 03:20:41.885112 2996 policy_none.go:49] "None policy: Start" Apr 16 03:20:41.899635 kubelet[2996]: I0416 03:20:41.885123 2996 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 03:20:41.899635 kubelet[2996]: I0416 03:20:41.885134 2996 state_mem.go:35] "Initializing new in-memory state store" Apr 16 03:20:41.899635 kubelet[2996]: I0416 03:20:41.885261 2996 state_mem.go:75] "Updated machine memory state" Apr 16 03:20:42.018637 kubelet[2996]: E0416 03:20:42.007271 2996 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 
03:20:42.056875 kubelet[2996]: I0416 03:20:42.056418 2996 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 03:20:42.071666 kubelet[2996]: I0416 03:20:42.056836 2996 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 03:20:42.090162 kubelet[2996]: I0416 03:20:42.089522 2996 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 03:20:43.107078 kubelet[2996]: E0416 03:20:43.101257 2996 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 03:20:43.897924 kubelet[2996]: I0416 03:20:43.883108 2996 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 03:20:45.820745 kubelet[2996]: I0416 03:20:45.815121 2996 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 16 03:20:45.958548 kubelet[2996]: I0416 03:20:45.957371 2996 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 03:20:46.896162 kubelet[2996]: I0416 03:20:46.891565 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:20:46.896162 kubelet[2996]: I0416 03:20:46.891621 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:20:46.896162 kubelet[2996]: I0416 03:20:46.891839 2996 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:20:46.896162 kubelet[2996]: I0416 03:20:46.891861 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:20:46.896162 kubelet[2996]: I0416 03:20:46.891881 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:20:47.349272 kubelet[2996]: I0416 03:20:47.349206 2996 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 03:20:48.296033 kubelet[2996]: I0416 03:20:48.201905 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 16 03:20:50.117456 kubelet[2996]: I0416 03:20:50.116940 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08e6e58b5c66a8e05059ea871273285b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"08e6e58b5c66a8e05059ea871273285b\") " pod="kube-system/kube-apiserver-localhost" Apr 16 03:20:50.430364 kubelet[2996]: I0416 03:20:50.429074 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08e6e58b5c66a8e05059ea871273285b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"08e6e58b5c66a8e05059ea871273285b\") " pod="kube-system/kube-apiserver-localhost" Apr 16 03:20:50.441018 kubelet[2996]: I0416 03:20:50.440385 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08e6e58b5c66a8e05059ea871273285b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"08e6e58b5c66a8e05059ea871273285b\") " pod="kube-system/kube-apiserver-localhost" Apr 16 03:20:50.660783 kubelet[2996]: E0416 03:20:50.640569 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:20:51.176125 kubelet[2996]: E0416 03:20:51.162603 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:20:51.672411 kubelet[2996]: E0416 03:20:51.661221 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.403s" Apr 16 03:20:51.738106 kubelet[2996]: E0416 03:20:51.735266 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:20:55.980878 kubelet[2996]: E0416 03:20:55.978488 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8"
Apr 16 03:20:55.980878 kubelet[2996]: E0416 03:20:55.980776 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:20:56.163612 kubelet[2996]: E0416 03:20:55.981926 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:20:56.293841 kubelet[2996]: E0416 03:20:56.291980 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.239s"
Apr 16 03:20:58.292029 kubelet[2996]: E0416 03:20:58.291557 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.992s"
Apr 16 03:20:59.576913 kubelet[2996]: E0416 03:20:59.572077 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:21:00.723186 kubelet[2996]: E0416 03:21:00.723090 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.428s"
Apr 16 03:21:00.765804 kubelet[2996]: E0416 03:21:00.762539 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:21:03.248214 kubelet[2996]: E0416 03:21:03.226784 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.481s"
Apr 16 03:21:04.207025 kubelet[2996]: E0416 03:21:04.201182 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:21:04.389210 kubelet[2996]: E0416 03:21:04.385326 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:21:04.508031 kubelet[2996]: E0416 03:21:04.507551 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:21:06.394262 kubelet[2996]: E0416 03:21:06.380461 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:21:09.932893 kubelet[2996]: E0416 03:21:09.910501 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.164s"
Apr 16 03:21:14.538262 kubelet[2996]: E0416 03:21:14.515971 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.605s"
Apr 16 03:21:16.756971 kubelet[2996]: E0416 03:21:16.755980 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.239s"
Apr 16 03:21:18.432426 kubelet[2996]: E0416 03:21:18.432217 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.676s"
Apr 16 03:21:21.826218 kubelet[2996]: E0416 03:21:21.797029 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.335s"
Apr 16 03:21:28.625020 kubelet[2996]: E0416 03:21:28.622524 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.302s"
Apr 16 03:21:29.686792 kubelet[2996]: E0416 03:21:29.674626 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.052s"
Apr 16 03:21:30.860914 kubelet[2996]: E0416 03:21:30.859145 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.15s"
Apr 16 03:21:31.588819 kubelet[2996]: I0416 03:21:31.587168 2996 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 16 03:21:32.415804 containerd[1577]: time="2026-04-16T03:21:32.401884461Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 16 03:21:32.681207 kubelet[2996]: I0416 03:21:32.664720 2996 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 16 03:21:33.182446 kubelet[2996]: E0416 03:21:33.176873 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.93s"
Apr 16 03:21:36.897419 kubelet[2996]: E0416 03:21:36.897379 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.498s"
Apr 16 03:21:38.863256 kubelet[2996]: I0416 03:21:38.856308 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2c89ec62-bbc9-4da6-94fe-7ffb08b28d48-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-w8fj8\" (UID: \"2c89ec62-bbc9-4da6-94fe-7ffb08b28d48\") " pod="tigera-operator/tigera-operator-6bf85f8dd-w8fj8"
Apr 16 03:21:38.863256 kubelet[2996]: I0416 03:21:38.859076 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-659vp\" (UniqueName: \"kubernetes.io/projected/2c89ec62-bbc9-4da6-94fe-7ffb08b28d48-kube-api-access-659vp\") pod \"tigera-operator-6bf85f8dd-w8fj8\" (UID: \"2c89ec62-bbc9-4da6-94fe-7ffb08b28d48\") " pod="tigera-operator/tigera-operator-6bf85f8dd-w8fj8"
Apr 16 03:21:40.697429 kubelet[2996]: I0416 03:21:40.697336 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbr4\" (UniqueName: \"kubernetes.io/projected/7b2e046f-2151-42b4-ab53-17bcedf51dfd-kube-api-access-4rbr4\") pod \"kube-proxy-8pn9h\" (UID: \"7b2e046f-2151-42b4-ab53-17bcedf51dfd\") " pod="kube-system/kube-proxy-8pn9h"
Apr 16 03:21:40.735911 kubelet[2996]: I0416 03:21:40.732027 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b2e046f-2151-42b4-ab53-17bcedf51dfd-kube-proxy\") pod \"kube-proxy-8pn9h\" (UID: \"7b2e046f-2151-42b4-ab53-17bcedf51dfd\") " pod="kube-system/kube-proxy-8pn9h"
Apr 16 03:21:40.735911 kubelet[2996]: I0416 03:21:40.733578 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b2e046f-2151-42b4-ab53-17bcedf51dfd-lib-modules\") pod \"kube-proxy-8pn9h\" (UID: \"7b2e046f-2151-42b4-ab53-17bcedf51dfd\") " pod="kube-system/kube-proxy-8pn9h"
Apr 16 03:21:40.735911 kubelet[2996]: I0416 03:21:40.733605 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b2e046f-2151-42b4-ab53-17bcedf51dfd-xtables-lock\") pod \"kube-proxy-8pn9h\" (UID: \"7b2e046f-2151-42b4-ab53-17bcedf51dfd\") " pod="kube-system/kube-proxy-8pn9h"
Apr 16 03:21:41.290981 kubelet[2996]: E0416 03:21:41.290907 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.828s"
Apr 16 03:21:42.366839 kubelet[2996]: E0416 03:21:42.366785 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.075s"
Apr 16 03:21:42.477833 containerd[1577]: time="2026-04-16T03:21:42.477777254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-w8fj8,Uid:2c89ec62-bbc9-4da6-94fe-7ffb08b28d48,Namespace:tigera-operator,Attempt:0,}"
Apr 16 03:21:42.868837 kubelet[2996]: E0416 03:21:42.866325 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:21:42.871315 containerd[1577]: time="2026-04-16T03:21:42.871270736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pn9h,Uid:7b2e046f-2151-42b4-ab53-17bcedf51dfd,Namespace:kube-system,Attempt:0,}"
Apr 16 03:21:45.729767 containerd[1577]: time="2026-04-16T03:21:45.553315296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 03:21:45.729767 containerd[1577]: time="2026-04-16T03:21:45.553471731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 03:21:45.729767 containerd[1577]: time="2026-04-16T03:21:45.553492592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 03:21:45.729767 containerd[1577]: time="2026-04-16T03:21:45.639226151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 03:21:46.507436 kubelet[2996]: E0416 03:21:46.499677 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.127s"
Apr 16 03:21:48.917003 kubelet[2996]: E0416 03:21:48.916969 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.606s"
Apr 16 03:21:48.921651 containerd[1577]: time="2026-04-16T03:21:48.920988821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 03:21:48.921651 containerd[1577]: time="2026-04-16T03:21:48.921049948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 03:21:48.921651 containerd[1577]: time="2026-04-16T03:21:48.921066251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 03:21:48.921651 containerd[1577]: time="2026-04-16T03:21:48.921154462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 03:21:49.743064 containerd[1577]: time="2026-04-16T03:21:49.741520918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-w8fj8,Uid:2c89ec62-bbc9-4da6-94fe-7ffb08b28d48,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e601139c29cefbb9e8f3ae8db0c4aea214d94c71203f7cce2e77510f85ab0d5d\""
Apr 16 03:21:50.922780 containerd[1577]: time="2026-04-16T03:21:50.919468916Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 16 03:21:50.941336 containerd[1577]: time="2026-04-16T03:21:50.940384131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pn9h,Uid:7b2e046f-2151-42b4-ab53-17bcedf51dfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f822042a1ae3c80b4c7608f1807760d3971f93bcfaa971d26e2f4a0567b385f\""
Apr 16 03:21:50.979344 kubelet[2996]: E0416 03:21:50.978913 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:21:51.960824 containerd[1577]: time="2026-04-16T03:21:51.960741754Z" level=info msg="CreateContainer within sandbox \"3f822042a1ae3c80b4c7608f1807760d3971f93bcfaa971d26e2f4a0567b385f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 16 03:21:52.533813 kubelet[2996]: E0416 03:21:52.522199 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.027s"
Apr 16 03:21:54.004459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235089041.mount: Deactivated successfully.
Apr 16 03:21:55.301563 containerd[1577]: time="2026-04-16T03:21:55.301354934Z" level=info msg="CreateContainer within sandbox \"3f822042a1ae3c80b4c7608f1807760d3971f93bcfaa971d26e2f4a0567b385f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"19c1b1de124eea3684fc7712a3c43973f19d58f056dd945eb1ceb62e956398f0\""
Apr 16 03:21:55.435250 containerd[1577]: time="2026-04-16T03:21:55.434632602Z" level=info msg="StartContainer for \"19c1b1de124eea3684fc7712a3c43973f19d58f056dd945eb1ceb62e956398f0\""
Apr 16 03:21:55.438852 kubelet[2996]: E0416 03:21:55.436620 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.632s"
Apr 16 03:21:57.063259 kubelet[2996]: E0416 03:21:57.063122 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.626s"
Apr 16 03:21:58.778633 kubelet[2996]: E0416 03:21:58.778467 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.434s"
Apr 16 03:21:59.676194 systemd[1]: run-containerd-runc-k8s.io-19c1b1de124eea3684fc7712a3c43973f19d58f056dd945eb1ceb62e956398f0-runc.bHzRRH.mount: Deactivated successfully.
Apr 16 03:22:00.603388 kubelet[2996]: E0416 03:22:00.587207 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.132s"
Apr 16 03:22:02.885567 containerd[1577]: time="2026-04-16T03:22:02.884956920Z" level=info msg="StartContainer for \"19c1b1de124eea3684fc7712a3c43973f19d58f056dd945eb1ceb62e956398f0\" returns successfully"
Apr 16 03:22:03.639833 kubelet[2996]: E0416 03:22:03.637815 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:22:04.689585 kubelet[2996]: I0416 03:22:04.689460 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8pn9h" podStartSLOduration=28.689392811 podStartE2EDuration="28.689392811s" podCreationTimestamp="2026-04-16 03:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 03:22:04.566150204 +0000 UTC m=+106.845380843" watchObservedRunningTime="2026-04-16 03:22:04.689392811 +0000 UTC m=+106.968623460"
Apr 16 03:22:04.799054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857054090.mount: Deactivated successfully.
Apr 16 03:22:04.905186 kubelet[2996]: E0416 03:22:04.905124 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:22:07.737783 kubelet[2996]: E0416 03:22:07.734590 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:22:16.399928 kubelet[2996]: E0416 03:22:16.399384 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.151s"
Apr 16 03:22:22.388152 kubelet[2996]: E0416 03:22:22.383338 2996 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.133s"
Apr 16 03:22:23.294538 kubelet[2996]: E0416 03:22:23.289068 2996 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:22:24.974089 sudo[1769]: pam_unix(sudo:session): session closed for user root
Apr 16 03:22:25.000736 sshd[1762]: pam_unix(sshd:session): session closed for user core
Apr 16 03:22:25.077233 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:46018.service: Deactivated successfully.
Apr 16 03:22:25.137044 systemd[1]: session-7.scope: Deactivated successfully.
Apr 16 03:22:25.161070 systemd-logind[1554]: Session 7 logged out. Waiting for processes to exit.
Apr 16 03:22:25.218262 systemd-logind[1554]: Removed session 7.