Apr 28 00:52:31.174310 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:40:10 -00 2026 Apr 28 00:52:31.174341 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 00:52:31.174356 kernel: BIOS-provided physical RAM map: Apr 28 00:52:31.174363 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 28 00:52:31.174370 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 28 00:52:31.174377 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 28 00:52:31.174386 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 28 00:52:31.174395 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 28 00:52:31.174403 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 28 00:52:31.174970 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 28 00:52:31.175007 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 28 00:52:31.175014 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 28 00:52:31.175044 kernel: NX (Execute Disable) protection: active Apr 28 00:52:31.175051 kernel: APIC: Static calls initialized Apr 28 00:52:31.175061 kernel: SMBIOS 2.8 present. Apr 28 00:52:31.175090 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 28 00:52:31.175099 kernel: Hypervisor detected: KVM Apr 28 00:52:31.175107 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 28 00:52:31.175115 kernel: kvm-clock: using sched offset of 10681488235 cycles Apr 28 00:52:31.175123 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 28 00:52:31.175131 kernel: tsc: Detected 2793.438 MHz processor Apr 28 00:52:31.175139 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 28 00:52:31.175147 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 28 00:52:31.175155 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 28 00:52:31.175167 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 28 00:52:31.175199 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 28 00:52:31.175207 kernel: Using GB pages for direct mapping Apr 28 00:52:31.175214 kernel: ACPI: Early table checksum verification disabled Apr 28 00:52:31.175222 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 28 00:52:31.175229 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:31.175237 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:31.175244 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:31.175252 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 28 00:52:31.175262 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:31.175270 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:31.175277 kernel: 
ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:31.175285 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:31.175293 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 28 00:52:31.175301 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 28 00:52:31.175310 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 28 00:52:31.175322 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 28 00:52:31.175333 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 28 00:52:31.175340 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 28 00:52:31.175348 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 28 00:52:31.175356 kernel: No NUMA configuration found Apr 28 00:52:31.175364 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 28 00:52:31.175373 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 28 00:52:31.175384 kernel: Zone ranges: Apr 28 00:52:31.175392 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 28 00:52:31.175401 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 28 00:52:31.175410 kernel: Normal empty Apr 28 00:52:31.175455 kernel: Movable zone start for each node Apr 28 00:52:31.175463 kernel: Early memory node ranges Apr 28 00:52:31.175472 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 28 00:52:31.175480 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 28 00:52:31.175489 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 28 00:52:31.175498 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 28 00:52:31.175509 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 28 00:52:31.175532 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 28 00:52:31.175541 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 28 00:52:31.175549 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 28 00:52:31.175558 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 28 00:52:31.175567 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 28 00:52:31.175576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 28 00:52:31.175585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 28 00:52:31.175594 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 28 00:52:31.175604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 28 00:52:31.175612 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 28 00:52:31.175621 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 28 00:52:31.175629 kernel: TSC deadline timer available Apr 28 00:52:31.175637 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 28 00:52:31.175645 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 28 00:52:31.175653 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 28 00:52:31.175660 kernel: kvm-guest: setup PV sched yield Apr 28 00:52:31.175683 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 28 00:52:31.175694 kernel: Booting paravirtualized kernel on KVM Apr 28 00:52:31.175702 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 28 00:52:31.175710 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 
nr_cpu_ids:4 nr_node_ids:1 Apr 28 00:52:31.175718 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 28 00:52:31.175726 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 28 00:52:31.175734 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 28 00:52:31.175743 kernel: kvm-guest: PV spinlocks enabled Apr 28 00:52:31.175751 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 28 00:52:31.175761 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 00:52:31.175771 kernel: random: crng init done Apr 28 00:52:31.175779 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 28 00:52:31.175787 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 28 00:52:31.175796 kernel: Fallback order for Node 0: 0 Apr 28 00:52:31.175803 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Apr 28 00:52:31.175811 kernel: Policy zone: DMA32 Apr 28 00:52:31.175819 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 28 00:52:31.175844 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 137896K reserved, 0K cma-reserved) Apr 28 00:52:31.175859 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 28 00:52:31.175868 kernel: ftrace: allocating 37996 entries in 149 pages Apr 28 00:52:31.175876 kernel: ftrace: allocated 149 pages with 4 groups Apr 28 00:52:31.175884 kernel: Dynamic Preempt: voluntary Apr 28 00:52:31.175892 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 28 00:52:31.175901 kernel: rcu: RCU event tracing is enabled. Apr 28 00:52:31.175909 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 28 00:52:31.175932 kernel: Trampoline variant of Tasks RCU enabled. Apr 28 00:52:31.175953 kernel: Rude variant of Tasks RCU enabled. Apr 28 00:52:31.175966 kernel: Tracing variant of Tasks RCU enabled. Apr 28 00:52:31.175974 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 28 00:52:31.175997 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 28 00:52:31.176005 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 28 00:52:31.176215 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 28 00:52:31.176238 kernel: Console: colour VGA+ 80x25 Apr 28 00:52:31.176248 kernel: printk: console [ttyS0] enabled Apr 28 00:52:31.176270 kernel: ACPI: Core revision 20230628 Apr 28 00:52:31.176279 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 28 00:52:31.176290 kernel: APIC: Switch to symmetric I/O mode setup Apr 28 00:52:31.176299 kernel: x2apic enabled Apr 28 00:52:31.176307 kernel: APIC: Switched APIC routing to: physical x2apic Apr 28 00:52:31.176316 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 28 00:52:31.176325 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 28 00:52:31.176333 kernel: kvm-guest: setup PV IPIs Apr 28 00:52:31.176340 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 28 00:52:31.176348 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 28 00:52:31.176366 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 28 00:52:31.176375 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 28 00:52:31.176384 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 28 00:52:31.176393 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 28 00:52:31.176403 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 28 00:52:31.187370 kernel: Spectre V2 : Mitigation: Retpolines Apr 28 00:52:31.187440 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 28 00:52:31.187464 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 28 00:52:31.187482 kernel: RETBleed: Vulnerable Apr 28 00:52:31.187492 kernel: Speculative Store Bypass: Vulnerable Apr 28 00:52:31.187503 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 28 00:52:31.188040 kernel: GDS: Unknown: Dependent on hypervisor status Apr 28 00:52:31.188054 kernel: active return thunk: its_return_thunk Apr 28 00:52:31.188063 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 28 00:52:31.188073 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 28 00:52:31.188082 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 28 00:52:31.188091 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 28 00:52:31.188154 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 28 00:52:31.188164 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 28 00:52:31.188197 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 28 00:52:31.188206 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 28 00:52:31.188216 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 28 00:52:31.188224 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 28 00:52:31.188233 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 28 00:52:31.188242 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 28 00:52:31.188251 kernel: Freeing SMP alternatives memory: 32K Apr 28 00:52:31.188263 kernel: pid_max: default: 32768 minimum: 301 Apr 28 00:52:31.188272 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 28 00:52:31.188281 kernel: landlock: Up and running. 
Apr 28 00:52:31.188290 kernel: SELinux: Initializing. Apr 28 00:52:31.188300 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 28 00:52:31.188310 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 28 00:52:31.188319 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 28 00:52:31.188345 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 00:52:31.188367 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 00:52:31.188392 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 00:52:31.188486 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 28 00:52:31.188498 kernel: signal: max sigframe size: 3632 Apr 28 00:52:31.188507 kernel: rcu: Hierarchical SRCU implementation. Apr 28 00:52:31.188517 kernel: rcu: Max phase no-delay instances is 400. Apr 28 00:52:31.188526 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 28 00:52:31.188535 kernel: smp: Bringing up secondary CPUs ... Apr 28 00:52:31.188544 kernel: smpboot: x86: Booting SMP configuration: Apr 28 00:52:31.188553 kernel: .... node #0, CPUs: #1 #2 #3 Apr 28 00:52:31.188579 kernel: smp: Brought up 1 node, 4 CPUs Apr 28 00:52:31.188599 kernel: smpboot: Max logical packages: 1 Apr 28 00:52:31.188608 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 28 00:52:31.188617 kernel: devtmpfs: initialized Apr 28 00:52:31.188626 kernel: x86/mm: Memory block size: 128MB Apr 28 00:52:31.188635 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 28 00:52:31.188644 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 28 00:52:31.188652 kernel: pinctrl core: initialized pinctrl subsystem Apr 28 00:52:31.188661 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 28 00:52:31.188673 kernel: audit: initializing netlink subsys (disabled) Apr 28 00:52:31.188682 kernel: audit: type=2000 audit(1777337546.739:1): state=initialized audit_enabled=0 res=1 Apr 28 00:52:31.188691 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 28 00:52:31.188700 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 28 00:52:31.188709 kernel: cpuidle: using governor menu Apr 28 00:52:31.188718 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 28 00:52:31.188727 kernel: dca service started, version 1.12.1 Apr 28 00:52:31.188736 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 28 00:52:31.188745 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 28 00:52:31.188756 kernel: PCI: Using configuration type 1 for base access Apr 28 00:52:31.188765 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 28 00:52:31.188774 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 28 00:52:31.188783 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 28 00:52:31.188793 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 28 00:52:31.188803 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 28 00:52:31.188812 kernel: ACPI: Added _OSI(Module Device) Apr 28 00:52:31.188822 kernel: ACPI: Added _OSI(Processor Device) Apr 28 00:52:31.188831 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 28 00:52:31.188843 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 28 00:52:31.188853 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 28 00:52:31.188863 kernel: ACPI: Interpreter enabled Apr 28 00:52:31.188874 kernel: ACPI: PM: (supports S0 S3 S5) Apr 28 00:52:31.188885 kernel: ACPI: Using IOAPIC for interrupt routing Apr 28 00:52:31.188895 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 28 00:52:31.188905 kernel: PCI: Using E820 reservations for host bridge windows Apr 28 00:52:31.188916 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 28 00:52:31.188927 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 28 00:52:31.190829 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 28 00:52:31.190984 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 28 00:52:31.191080 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 28 00:52:31.191092 kernel: PCI host bridge to bus 0000:00 Apr 28 00:52:31.197086 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 28 00:52:31.197277 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 28 00:52:31.197384 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 28 00:52:31.200513 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 28 00:52:31.220779 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 28 00:52:31.246687 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 28 00:52:31.247840 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 28 00:52:31.249003 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 28 00:52:31.249209 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 28 00:52:31.249315 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 28 00:52:31.249401 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 28 00:52:31.252349 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 28 00:52:31.253764 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 28 00:52:31.254140 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 28 00:52:31.255101 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 28 00:52:31.255276 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 28 00:52:31.255406 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 28 00:52:31.255600 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 28 00:52:31.255696 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 28 00:52:31.255790 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 28 00:52:31.255880 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] Apr 28 00:52:31.256758 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 28 00:52:31.256938 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 28 00:52:31.257034 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 28 00:52:31.257121 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 28 00:52:31.258998 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Apr 28 00:52:31.259680 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 28 00:52:31.259779 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 28 00:52:31.259870 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 11718 usecs Apr 28 00:52:31.260909 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 28 00:52:31.261012 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 28 00:52:31.261082 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 28 00:52:31.261689 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 28 00:52:31.261775 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 28 00:52:31.261783 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 28 00:52:31.261790 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 28 00:52:31.261796 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 28 00:52:31.261808 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 28 00:52:31.261814 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 28 00:52:31.261820 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 28 00:52:31.261825 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 28 00:52:31.261831 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 28 00:52:31.261837 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 28 00:52:31.261843 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 28 00:52:31.261849 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 28 00:52:31.261854 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 28 00:52:31.261862 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 28 00:52:31.261868 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 28 00:52:31.261873 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 28 00:52:31.261879 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 28 00:52:31.261884 kernel: iommu: Default domain type: Translated Apr 28 00:52:31.261890 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 28 00:52:31.261896 kernel: PCI: Using ACPI for IRQ routing Apr 28 00:52:31.261902 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 28 00:52:31.261908 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 28 00:52:31.261915 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 28 00:52:31.261978 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 28 00:52:31.262039 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 28 00:52:31.262100 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 28 00:52:31.262107 kernel: vgaarb: loaded Apr 28 00:52:31.262113 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 28 00:52:31.262119 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 28 00:52:31.262124 kernel: clocksource: Switched to clocksource kvm-clock Apr 28 00:52:31.262132 kernel: 
VFS: Disk quotas dquot_6.6.0 Apr 28 00:52:31.262138 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 28 00:52:31.262144 kernel: pnp: PnP ACPI init Apr 28 00:52:31.262747 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 28 00:52:31.262774 kernel: pnp: PnP ACPI: found 6 devices Apr 28 00:52:31.262781 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 28 00:52:31.262787 kernel: NET: Registered PF_INET protocol family Apr 28 00:52:31.262793 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 28 00:52:31.262799 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 28 00:52:31.262809 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 28 00:52:31.262814 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 28 00:52:31.262820 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 28 00:52:31.262826 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 28 00:52:31.262835 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 00:52:31.262841 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 00:52:31.262847 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 28 00:52:31.262852 kernel: NET: Registered PF_XDP protocol family Apr 28 00:52:31.262919 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 28 00:52:31.262976 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 28 00:52:31.263029 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 28 00:52:31.263082 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 28 00:52:31.263137 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 28 00:52:31.263302 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 28 00:52:31.263319 kernel: PCI: CLS 0 bytes, default 64 Apr 28 00:52:31.263328 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 28 00:52:31.263338 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 28 00:52:31.263352 kernel: Initialise system trusted keyrings Apr 28 00:52:31.263362 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 28 00:52:31.263371 kernel: Key type asymmetric registered Apr 28 00:52:31.263380 kernel: Asymmetric key parser 'x509' registered Apr 28 00:52:31.263390 kernel: hrtimer: interrupt took 16692200 ns Apr 28 00:52:31.263400 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 28 00:52:31.263409 kernel: io scheduler mq-deadline registered Apr 28 00:52:31.263963 kernel: io scheduler kyber registered Apr 28 00:52:31.263982 kernel: io scheduler bfq registered Apr 28 00:52:31.263991 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 28 00:52:31.264001 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 28 00:52:31.264010 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 28 00:52:31.264019 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 28 00:52:31.264029 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 28 00:52:31.264038 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 28 00:52:31.264047 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 28 
00:52:31.264055 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 28 00:52:31.264067 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 28 00:52:31.264742 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 28 00:52:31.264792 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 28 00:52:31.264880 kernel: rtc_cmos 00:04: registered as rtc0 Apr 28 00:52:31.264959 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T00:52:29 UTC (1777337549) Apr 28 00:52:31.264970 kernel: hpet: Lost 2 RTC interrupts Apr 28 00:52:31.265044 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 28 00:52:31.265055 kernel: intel_pstate: CPU model not supported Apr 28 00:52:31.265099 kernel: NET: Registered PF_INET6 protocol family Apr 28 00:52:31.265108 kernel: Segment Routing with IPv6 Apr 28 00:52:31.265117 kernel: In-situ OAM (IOAM) with IPv6 Apr 28 00:52:31.265126 kernel: NET: Registered PF_PACKET protocol family Apr 28 00:52:31.265134 kernel: Key type dns_resolver registered Apr 28 00:52:31.265143 kernel: IPI shorthand broadcast: enabled Apr 28 00:52:31.265151 kernel: sched_clock: Marking stable (3326059080, 622781782)->(4412188901, -463348039) Apr 28 00:52:31.265160 kernel: registered taskstats version 1 Apr 28 00:52:31.265169 kernel: Loading compiled-in X.509 certificates Apr 28 00:52:31.265213 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 40b5c5a01382737457e1eae3e889ae587960eb18' Apr 28 00:52:31.265223 kernel: Key type .fscrypt registered Apr 28 00:52:31.265232 kernel: Key type fscrypt-provisioning registered Apr 28 00:52:31.265241 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 28 00:52:31.265250 kernel: ima: Allocated hash algorithm: sha1 Apr 28 00:52:31.265260 kernel: ima: No architecture policies found Apr 28 00:52:31.265271 kernel: clk: Disabling unused clocks Apr 28 00:52:31.265281 kernel: Freeing unused kernel image (initmem) memory: 42884K Apr 28 00:52:31.265287 kernel: Write protecting the kernel read-only data: 36864k Apr 28 00:52:31.265296 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 28 00:52:31.265302 kernel: Run /init as init process Apr 28 00:52:31.265308 kernel: with arguments: Apr 28 00:52:31.265314 kernel: /init Apr 28 00:52:31.265319 kernel: with environment: Apr 28 00:52:31.265325 kernel: HOME=/ Apr 28 00:52:31.265331 kernel: TERM=linux Apr 28 00:52:31.265341 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 28 00:52:31.265349 systemd[1]: Detected virtualization kvm. Apr 28 00:52:31.265357 systemd[1]: Detected architecture x86-64. Apr 28 00:52:31.265363 systemd[1]: Running in initrd. Apr 28 00:52:31.265369 systemd[1]: No hostname configured, using default hostname. Apr 28 00:52:31.265375 systemd[1]: Hostname set to . Apr 28 00:52:31.265381 systemd[1]: Initializing machine ID from VM UUID. Apr 28 00:52:31.265387 systemd[1]: Queued start job for default target initrd.target. Apr 28 00:52:31.265393 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:52:31.265400 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 28 00:52:31.265409 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 28 00:52:31.265882 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 00:52:31.265896 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 28 00:52:31.265906 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 28 00:52:31.265917 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 28 00:52:31.265930 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 28 00:52:31.265939 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:52:31.265949 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:52:31.265959 systemd[1]: Reached target paths.target - Path Units. Apr 28 00:52:31.265968 systemd[1]: Reached target slices.target - Slice Units. Apr 28 00:52:31.265978 systemd[1]: Reached target swap.target - Swaps. Apr 28 00:52:31.265988 systemd[1]: Reached target timers.target - Timer Units. Apr 28 00:52:31.265999 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 00:52:31.266014 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 00:52:31.266025 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 28 00:52:31.266036 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 28 00:52:31.266046 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:52:31.266053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 00:52:31.266059 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:52:31.266066 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 00:52:31.266072 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 28 00:52:31.266080 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 00:52:31.266087 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 28 00:52:31.266093 systemd[1]: Starting systemd-fsck-usr.service... Apr 28 00:52:31.266099 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 00:52:31.266108 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 00:52:31.266119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:52:31.266129 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 28 00:52:31.266135 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:52:31.266235 systemd-journald[195]: Collecting audit messages is disabled. Apr 28 00:52:31.266259 systemd[1]: Finished systemd-fsck-usr.service. Apr 28 00:52:31.266267 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 00:52:31.266274 systemd-journald[195]: Journal started Apr 28 00:52:31.266331 systemd-journald[195]: Runtime Journal (/run/log/journal/943e2ad6079140b2bd4d204214d9158c) is 6.0M, max 48.4M, 42.3M free. 
Apr 28 00:52:31.200446 systemd-modules-load[196]: Inserted module 'overlay' Apr 28 00:52:31.534041 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 00:52:31.534099 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 28 00:52:31.534113 kernel: Bridge firewalling registered Apr 28 00:52:31.333085 systemd-modules-load[196]: Inserted module 'br_netfilter' Apr 28 00:52:31.535509 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 00:52:31.536767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:52:31.570594 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:52:31.574319 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 00:52:31.582660 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 00:52:31.590615 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:52:31.602954 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:52:31.660355 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:52:31.673704 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:52:31.675628 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:52:31.681313 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 00:52:31.719677 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:52:31.726559 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 28 00:52:31.801107 dracut-cmdline[235]: dracut-dracut-053 Apr 28 00:52:31.845203 systemd-resolved[223]: Positive Trust Anchors: Apr 28 00:52:31.845276 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 00:52:31.845316 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 00:52:31.894199 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 00:52:31.852295 systemd-resolved[223]: Defaulting to hostname 'linux'. Apr 28 00:52:31.856644 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 00:52:31.881507 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 28 00:52:32.330865 kernel: SCSI subsystem initialized Apr 28 00:52:32.382675 kernel: Loading iSCSI transport class v2.0-870. Apr 28 00:52:32.448946 kernel: iscsi: registered transport (tcp) Apr 28 00:52:32.582930 kernel: iscsi: registered transport (qla4xxx) Apr 28 00:52:32.585821 kernel: QLogic iSCSI HBA Driver Apr 28 00:52:32.934683 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 28 00:52:32.985107 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 28 00:52:33.148383 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 28 00:52:33.148691 kernel: device-mapper: uevent: version 1.0.3 Apr 28 00:52:33.148703 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 28 00:52:33.348884 kernel: raid6: avx512x4 gen() 19914 MB/s Apr 28 00:52:33.376822 kernel: raid6: avx512x2 gen() 21452 MB/s Apr 28 00:52:33.394754 kernel: raid6: avx512x1 gen() 18988 MB/s Apr 28 00:52:33.435247 kernel: raid6: avx2x4 gen() 5395 MB/s Apr 28 00:52:33.452900 kernel: raid6: avx2x2 gen() 17472 MB/s Apr 28 00:52:33.474741 kernel: raid6: avx2x1 gen() 15568 MB/s Apr 28 00:52:33.477028 kernel: raid6: using algorithm avx512x2 gen() 21452 MB/s Apr 28 00:52:33.494443 kernel: raid6: .... xor() 8414 MB/s, rmw enabled Apr 28 00:52:33.494679 kernel: raid6: using avx512x2 recovery algorithm Apr 28 00:52:33.611629 kernel: xor: automatically using best checksumming function avx Apr 28 00:52:34.151829 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 28 00:52:34.297611 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 28 00:52:34.406331 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:52:34.501117 systemd-udevd[417]: Using default interface naming scheme 'v255'. Apr 28 00:52:34.558713 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:52:34.579686 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 28 00:52:34.635846 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Apr 28 00:52:34.853150 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 00:52:34.870994 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 00:52:35.064501 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:52:35.082705 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 28 00:52:35.135719 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 28 00:52:35.139909 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 00:52:35.145089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:52:35.149520 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 00:52:35.156460 kernel: cryptd: max_cpu_qlen set to 1000 Apr 28 00:52:35.167636 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 28 00:52:35.176619 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 28 00:52:35.192609 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 28 00:52:35.192847 kernel: libata version 3.00 loaded. Apr 28 00:52:35.200296 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 00:52:35.235556 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 28 00:52:35.200897 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:52:35.243530 kernel: ahci 0000:00:1f.2: version 3.0 Apr 28 00:52:35.243714 kernel: AES CTR mode by8 optimization enabled Apr 28 00:52:35.243730 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 28 00:52:35.244660 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:52:35.259833 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 28 00:52:35.259855 kernel: GPT:9289727 != 19775487 Apr 28 00:52:35.259867 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 28 00:52:35.259879 kernel: GPT:9289727 != 19775487 Apr 28 00:52:35.259890 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 28 00:52:35.259901 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:52:35.259915 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 28 00:52:35.263773 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 28 00:52:35.258236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:52:35.270371 kernel: scsi host0: ahci Apr 28 00:52:35.270601 kernel: scsi host1: ahci Apr 28 00:52:35.261934 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:52:35.266954 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:52:35.275967 kernel: scsi host2: ahci Apr 28 00:52:35.277915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:52:35.324647 kernel: scsi host3: ahci Apr 28 00:52:35.324827 kernel: scsi host4: ahci Apr 28 00:52:35.324939 kernel: scsi host5: ahci Apr 28 00:52:35.325043 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 28 00:52:35.325055 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 28 00:52:35.325066 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 28 00:52:35.325078 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 28 00:52:35.325089 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 28 00:52:35.325102 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 28 00:52:35.325114 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (476) Apr 28 00:52:35.325126 kernel: BTRFS: device fsid c393bc7b-9362-4bef-afe6-6491ed4d6c93 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (474) Apr 28 00:52:35.305514 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 28 00:52:35.348138 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 28 00:52:35.365457 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Apr 28 00:52:35.716512 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 28 00:52:35.716538 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 28 00:52:35.716547 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 28 00:52:35.716554 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 28 00:52:35.716561 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 28 00:52:35.716570 kernel: ata3.00: applying bridge limits Apr 28 00:52:35.716578 kernel: ata3.00: configured for UDMA/100 Apr 28 00:52:35.716585 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 28 00:52:35.716592 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 28 00:52:35.716598 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 28 00:52:35.397293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 00:52:35.721283 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 28 00:52:35.723357 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 28 00:52:35.742063 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 28 00:52:35.747301 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:52:35.754187 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:52:35.771906 disk-uuid[568]: Primary Header is updated. Apr 28 00:52:35.771906 disk-uuid[568]: Secondary Entries is updated. Apr 28 00:52:35.771906 disk-uuid[568]: Secondary Header is updated. Apr 28 00:52:35.796945 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 28 00:52:35.835734 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 28 00:52:35.844536 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:52:35.844603 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 28 00:52:35.853528 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:52:35.880720 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:52:36.873499 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 28 00:52:36.880514 disk-uuid[571]: The operation has completed successfully. Apr 28 00:52:37.220670 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 28 00:52:37.225375 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 28 00:52:37.247118 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 28 00:52:37.317564 sh[594]: Success Apr 28 00:52:37.431262 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 28 00:52:37.757044 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 28 00:52:37.788806 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 28 00:52:37.865238 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 28 00:52:37.896031 kernel: BTRFS info (device dm-0): first mount of filesystem c393bc7b-9362-4bef-afe6-6491ed4d6c93 Apr 28 00:52:37.897700 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:52:37.897810 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 28 00:52:37.906739 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 28 00:52:37.929335 kernel: BTRFS info (device dm-0): using free space tree Apr 28 00:52:38.001659 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 28 00:52:38.078739 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 28 00:52:38.098885 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 28 00:52:38.113692 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 28 00:52:38.158111 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:52:38.158363 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:52:38.158377 kernel: BTRFS info (device vda6): using free space tree Apr 28 00:52:38.190909 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 00:52:38.280399 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 28 00:52:38.288881 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:52:38.306749 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 28 00:52:38.328891 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 28 00:52:38.570786 ignition[684]: Ignition 2.19.0 Apr 28 00:52:38.570820 ignition[684]: Stage: fetch-offline Apr 28 00:52:38.570858 ignition[684]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:52:38.570865 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:52:38.570964 ignition[684]: parsed url from cmdline: "" Apr 28 00:52:38.570967 ignition[684]: no config URL provided Apr 28 00:52:38.570970 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" Apr 28 00:52:38.570976 ignition[684]: no config at "/usr/lib/ignition/user.ign" Apr 28 00:52:38.571043 ignition[684]: op(1): [started] loading QEMU firmware config module Apr 28 00:52:38.571047 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 28 00:52:38.629204 ignition[684]: op(1): [finished] loading QEMU firmware config module Apr 28 00:52:38.745918 ignition[684]: parsing config with SHA512: 45591cb5bd7daccb86b80728628c37b2c86aae56fc9a3c553f9d8b1d5f4c5d59d25b16edaa1eb36c946f9bb40c2e3fcdc564406965dfa401a20476887e385ca6 Apr 28 00:52:38.754818 unknown[684]: fetched base config from "system" Apr 28 00:52:38.754841 unknown[684]: fetched user config from "qemu" Apr 28 00:52:38.755289 ignition[684]: fetch-offline: fetch-offline passed Apr 28 00:52:38.759272 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 00:52:38.755351 ignition[684]: Ignition finished successfully Apr 28 00:52:38.949467 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 00:52:39.070280 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 28 00:52:39.190374 systemd-networkd[783]: lo: Link UP Apr 28 00:52:39.191199 systemd-networkd[783]: lo: Gained carrier Apr 28 00:52:39.203965 systemd-networkd[783]: Enumeration completed Apr 28 00:52:39.207941 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 00:52:39.236654 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:52:39.236660 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:52:39.237129 systemd[1]: Reached target network.target - Network. Apr 28 00:52:39.239746 systemd-networkd[783]: eth0: Link UP Apr 28 00:52:39.239750 systemd-networkd[783]: eth0: Gained carrier Apr 28 00:52:39.239760 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:52:39.241907 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 28 00:52:39.263845 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 28 00:52:39.272010 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 00:52:39.316188 ignition[785]: Ignition 2.19.0 Apr 28 00:52:39.316251 ignition[785]: Stage: kargs Apr 28 00:52:39.316532 ignition[785]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:52:39.316544 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:52:39.330188 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 28 00:52:39.317719 ignition[785]: kargs: kargs passed Apr 28 00:52:39.317780 ignition[785]: Ignition finished successfully Apr 28 00:52:39.346029 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 28 00:52:39.537903 ignition[795]: Ignition 2.19.0 Apr 28 00:52:39.537939 ignition[795]: Stage: disks Apr 28 00:52:39.538154 ignition[795]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:52:39.546891 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 28 00:52:39.538173 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:52:39.541550 ignition[795]: disks: disks passed Apr 28 00:52:39.541783 ignition[795]: Ignition finished successfully Apr 28 00:52:39.567023 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 28 00:52:39.576301 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 28 00:52:39.581314 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 00:52:39.590302 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 00:52:39.612922 systemd[1]: Reached target basic.target - Basic System. Apr 28 00:52:39.644946 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 28 00:52:39.887401 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 28 00:52:39.920720 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 28 00:52:39.965091 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 28 00:52:40.547583 kernel: EXT4-fs (vda9): mounted filesystem f590d1f8-5181-4682-9e04-fe65400dca5c r/w with ordered data mode. Quota mode: none. Apr 28 00:52:40.549297 systemd[1]: Mounted sysroot.mount - /sysroot. 
Apr 28 00:52:40.566829 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 28 00:52:40.639174 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 00:52:40.645340 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 28 00:52:40.647739 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 28 00:52:40.647794 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 28 00:52:40.699952 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Apr 28 00:52:40.647821 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 00:52:40.729664 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 28 00:52:40.744860 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:52:40.744889 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:52:40.744901 kernel: BTRFS info (device vda6): using free space tree Apr 28 00:52:40.766731 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 00:52:40.769330 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 28 00:52:40.792785 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 28 00:52:40.983763 systemd-networkd[783]: eth0: Gained IPv6LL Apr 28 00:52:41.148600 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Apr 28 00:52:41.176736 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Apr 28 00:52:41.214782 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Apr 28 00:52:41.289972 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Apr 28 00:52:42.037346 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 28 00:52:42.072071 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 28 00:52:42.126198 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 28 00:52:42.171684 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 28 00:52:42.179704 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:52:42.206188 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 28 00:52:42.793926 ignition[929]: INFO : Ignition 2.19.0 Apr 28 00:52:42.793926 ignition[929]: INFO : Stage: mount Apr 28 00:52:42.830724 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:52:42.830724 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:52:42.830724 ignition[929]: INFO : mount: mount passed Apr 28 00:52:42.844311 ignition[929]: INFO : Ignition finished successfully Apr 28 00:52:42.834044 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 28 00:52:42.862949 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 28 00:52:43.037288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 28 00:52:43.133627 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Apr 28 00:52:43.172940 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f Apr 28 00:52:43.173163 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 28 00:52:43.178230 kernel: BTRFS info (device vda6): using free space tree Apr 28 00:52:43.209687 kernel: BTRFS info (device vda6): auto enabling async discard Apr 28 00:52:43.219030 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 28 00:52:43.904997 ignition[959]: INFO : Ignition 2.19.0 Apr 28 00:52:43.904997 ignition[959]: INFO : Stage: files Apr 28 00:52:43.939157 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:52:43.939157 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:52:43.939157 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Apr 28 00:52:43.990868 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 28 00:52:43.990868 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 28 00:52:44.037744 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 28 00:52:44.041778 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 28 00:52:44.047848 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 28 00:52:44.042055 unknown[959]: wrote ssh authorized keys file for user: core Apr 28 00:52:44.065037 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 28 00:52:44.070862 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 28 00:52:44.070862 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 00:52:44.070862 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 28 00:52:44.243973 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 28 00:52:44.739893 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 28 00:52:44.749379 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 28 00:52:44.749379 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 28 00:52:44.749379 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 28 00:52:44.749379 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 28 00:52:44.749379 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 00:52:44.749379 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 28 00:52:44.749379 ignition[959]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 00:52:44.749379 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 28 00:52:44.749379 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 00:52:44.843374 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 28 00:52:44.843374 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 00:52:44.843374 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 00:52:44.843374 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 00:52:44.843374 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 28 00:52:45.300710 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 28 00:52:50.750502 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 28 00:52:50.750502 ignition[959]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 28 00:52:50.789581 ignition[959]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 28 00:52:50.823591 ignition[959]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 28 00:52:50.823591 ignition[959]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 28 00:52:50.823591 ignition[959]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 28 00:52:50.841367 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 00:52:50.841367 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 28 00:52:50.841367 ignition[959]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 28 00:52:50.841367 ignition[959]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Apr 28 00:52:50.841367 ignition[959]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 00:52:50.841367 ignition[959]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 28 00:52:50.841367 ignition[959]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Apr 28 00:52:50.841367 ignition[959]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Apr 28 00:52:51.442667 ignition[959]: 
INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 00:52:51.596024 ignition[959]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 28 00:52:51.610211 ignition[959]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Apr 28 00:52:51.610211 ignition[959]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Apr 28 00:52:51.610211 ignition[959]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Apr 28 00:52:51.627179 ignition[959]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 28 00:52:51.627179 ignition[959]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 28 00:52:51.627179 ignition[959]: INFO : files: files passed Apr 28 00:52:51.627179 ignition[959]: INFO : Ignition finished successfully Apr 28 00:52:51.658616 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 28 00:52:51.747929 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 28 00:52:51.765777 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 28 00:52:51.848262 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 28 00:52:51.852888 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Apr 28 00:52:51.853036 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 28 00:52:51.890954 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:52:51.890954 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:52:51.944190 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:52:51.957818 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 00:52:51.972672 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 28 00:52:51.996997 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 28 00:52:52.240134 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 28 00:52:52.240459 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 28 00:52:52.274669 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 28 00:52:52.291203 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 28 00:52:52.368883 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 28 00:52:52.391014 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 28 00:52:52.564250 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 00:52:52.676977 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 28 00:52:52.984173 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:52:53.019848 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:52:53.025191 systemd[1]: Stopped target timers.target - Timer Units. 
Apr 28 00:52:53.035679 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 28 00:52:53.036080 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 00:52:53.048400 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 28 00:52:53.069079 systemd[1]: Stopped target basic.target - Basic System. Apr 28 00:52:53.071781 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 28 00:52:53.083583 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 00:52:53.093766 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 28 00:52:53.098842 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 28 00:52:53.131095 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 00:52:53.142139 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 28 00:52:53.171519 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 28 00:52:53.194136 systemd[1]: Stopped target swap.target - Swaps. Apr 28 00:52:53.196980 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 28 00:52:53.197770 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 28 00:52:53.249366 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:52:53.269098 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:52:53.273176 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 28 00:52:53.278288 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:52:53.291040 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 28 00:52:53.292065 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 28 00:52:53.299740 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 28 00:52:53.300057 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 00:52:53.370499 systemd[1]: Stopped target paths.target - Path Units. Apr 28 00:52:53.382117 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 28 00:52:53.392062 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:52:53.396610 systemd[1]: Stopped target slices.target - Slice Units. Apr 28 00:52:53.415507 systemd[1]: Stopped target sockets.target - Socket Units. Apr 28 00:52:53.426757 systemd[1]: iscsid.socket: Deactivated successfully. Apr 28 00:52:53.426963 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 00:52:53.430756 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 28 00:52:53.430865 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 00:52:53.431072 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 28 00:52:53.431175 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 00:52:53.431464 systemd[1]: ignition-files.service: Deactivated successfully. Apr 28 00:52:53.431573 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 28 00:52:53.457094 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 28 00:52:53.471061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 28 00:52:53.477719 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Apr 28 00:52:53.477993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:52:53.484937 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 28 00:52:53.494698 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 00:52:53.559730 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 28 00:52:53.559861 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 28 00:52:53.595810 ignition[1013]: INFO : Ignition 2.19.0 Apr 28 00:52:53.595810 ignition[1013]: INFO : Stage: umount Apr 28 00:52:53.600335 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:52:53.600335 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 28 00:52:53.600335 ignition[1013]: INFO : umount: umount passed Apr 28 00:52:53.600335 ignition[1013]: INFO : Ignition finished successfully Apr 28 00:52:53.600752 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 28 00:52:53.602704 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 28 00:52:53.609975 systemd[1]: Stopped target network.target - Network. Apr 28 00:52:53.616383 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 28 00:52:53.616558 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 28 00:52:53.619891 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 28 00:52:53.619952 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 28 00:52:53.623505 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 28 00:52:53.623615 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 28 00:52:53.624000 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 28 00:52:53.624844 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 28 00:52:53.628481 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 28 00:52:53.635019 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 28 00:52:53.642562 systemd-networkd[783]: eth0: DHCPv6 lease lost Apr 28 00:52:53.643671 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 28 00:52:53.644765 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 28 00:52:53.644875 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 28 00:52:53.653558 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 28 00:52:53.653805 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 28 00:52:53.682050 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 28 00:52:53.689254 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 28 00:52:53.767023 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 28 00:52:53.767089 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:52:53.775156 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 28 00:52:53.775290 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 28 00:52:53.825869 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 28 00:52:53.827560 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 28 00:52:53.827743 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 00:52:53.837175 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Apr 28 00:52:53.837996 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:52:53.847293 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 28 00:52:53.848908 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 28 00:52:53.862087 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 28 00:52:53.862238 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:52:53.885376 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:52:53.997232 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 28 00:52:54.055191 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:52:54.074251 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 28 00:52:54.078109 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 28 00:52:54.099151 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 28 00:52:54.101275 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:52:54.134642 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 28 00:52:54.134855 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 28 00:52:54.142807 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 28 00:52:54.142933 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 28 00:52:54.161770 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 00:52:54.161901 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:52:54.192011 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 28 00:52:54.198692 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 28 00:52:54.198957 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:52:54.201724 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 28 00:52:54.202091 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:52:54.255031 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 28 00:52:54.256019 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:52:54.281223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:52:54.283822 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:52:54.311012 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 28 00:52:54.332298 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 28 00:52:54.342760 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 28 00:52:54.344889 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 28 00:52:54.358210 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 28 00:52:54.393130 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 28 00:52:54.577052 systemd[1]: Switching root. Apr 28 00:52:54.781389 systemd-journald[195]: Journal stopped Apr 28 00:53:05.479083 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). 
Apr 28 00:53:05.479259 kernel: SELinux: policy capability network_peer_controls=1 Apr 28 00:53:05.479317 kernel: SELinux: policy capability open_perms=1 Apr 28 00:53:05.479329 kernel: SELinux: policy capability extended_socket_class=1 Apr 28 00:53:05.479337 kernel: SELinux: policy capability always_check_network=0 Apr 28 00:53:05.479345 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 28 00:53:05.479353 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 28 00:53:05.479361 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 28 00:53:05.479369 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 28 00:53:05.479377 kernel: audit: type=1403 audit(1777337575.963:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 28 00:53:05.479405 systemd[1]: Successfully loaded SELinux policy in 131.790ms. Apr 28 00:53:05.479509 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 130.157ms. Apr 28 00:53:05.479519 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 28 00:53:05.479529 systemd[1]: Detected virtualization kvm. Apr 28 00:53:05.479537 systemd[1]: Detected architecture x86-64. Apr 28 00:53:05.479545 systemd[1]: Detected first boot. Apr 28 00:53:05.479553 systemd[1]: Initializing machine ID from VM UUID. Apr 28 00:53:05.479562 zram_generator::config[1075]: No configuration found. Apr 28 00:53:05.480981 systemd[1]: Populated /etc with preset unit settings. Apr 28 00:53:05.481153 systemd[1]: Queued start job for default target multi-user.target. Apr 28 00:53:05.481164 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 28 00:53:05.481173 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 28 00:53:05.481182 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 28 00:53:05.481190 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 28 00:53:05.481198 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 28 00:53:05.481206 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 28 00:53:05.481214 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 28 00:53:05.481241 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 28 00:53:05.481250 systemd[1]: Created slice user.slice - User and Session Slice. Apr 28 00:53:05.481258 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:53:05.481267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:53:05.481276 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 28 00:53:05.481285 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 28 00:53:05.481294 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 28 00:53:05.481302 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Apr 28 00:53:05.481310 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 28 00:53:05.481336 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:53:05.481345 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 28 00:53:05.481353 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:53:05.481380 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 00:53:05.481389 systemd[1]: Reached target slices.target - Slice Units. Apr 28 00:53:05.481397 systemd[1]: Reached target swap.target - Swaps. Apr 28 00:53:05.481405 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 28 00:53:05.481412 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 28 00:53:05.481522 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 28 00:53:05.481531 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 28 00:53:05.481540 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:53:05.481548 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 00:53:05.481556 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:53:05.481564 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 28 00:53:05.481572 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 28 00:53:05.481580 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 28 00:53:05.481589 systemd[1]: Mounting media.mount - External Media Directory... Apr 28 00:53:05.481597 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:53:05.481623 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 28 00:53:05.481631 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 28 00:53:05.481640 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 28 00:53:05.481647 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 28 00:53:05.481656 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:53:05.481664 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 00:53:05.481690 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 28 00:53:05.481698 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:53:05.481723 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 28 00:53:05.481732 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:53:05.481740 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 28 00:53:05.481748 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:53:05.481757 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 28 00:53:05.481765 kernel: fuse: init (API version 7.39) Apr 28 00:53:05.481802 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Apr 28 00:53:05.481812 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 28 00:53:05.481838 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 00:53:05.481846 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 00:53:05.481854 kernel: ACPI: bus type drm_connector registered Apr 28 00:53:05.481862 kernel: loop: module loaded Apr 28 00:53:05.481870 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 28 00:53:05.481878 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 28 00:53:05.481889 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 00:53:05.481898 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:53:05.481951 systemd-journald[1175]: Collecting audit messages is disabled. Apr 28 00:53:05.481993 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 28 00:53:05.482002 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 28 00:53:05.482027 systemd-journald[1175]: Journal started Apr 28 00:53:05.482066 systemd-journald[1175]: Runtime Journal (/run/log/journal/943e2ad6079140b2bd4d204214d9158c) is 6.0M, max 48.4M, 42.3M free. Apr 28 00:53:05.493583 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 00:53:05.505649 systemd[1]: Mounted media.mount - External Media Directory. Apr 28 00:53:05.508855 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 28 00:53:05.516172 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 28 00:53:05.524710 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 28 00:53:05.527796 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 28 00:53:05.559175 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:53:05.716897 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 28 00:53:05.718842 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 28 00:53:05.721904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:53:05.722092 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:53:05.733524 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 00:53:05.734604 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 00:53:05.741938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:53:05.743200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:53:05.759012 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 28 00:53:05.759233 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 28 00:53:05.796035 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:53:05.797259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:53:05.900978 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 00:53:05.975870 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 28 00:53:05.986027 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Apr 28 00:53:06.054333 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 28 00:53:06.199295 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 28 00:53:06.210708 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 28 00:53:06.218344 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 28 00:53:06.230276 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 28 00:53:06.261136 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 28 00:53:06.269341 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 00:53:06.298349 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 28 00:53:06.304305 systemd-journald[1175]: Time spent on flushing to /var/log/journal/943e2ad6079140b2bd4d204214d9158c is 122.291ms for 941 entries. Apr 28 00:53:06.304305 systemd-journald[1175]: System Journal (/var/log/journal/943e2ad6079140b2bd4d204214d9158c) is 8.0M, max 195.6M, 187.6M free. Apr 28 00:53:06.530903 systemd-journald[1175]: Received client request to flush runtime journal. Apr 28 00:53:06.342338 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 00:53:06.355087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:53:06.389814 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 00:53:06.416592 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:53:06.465770 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 28 00:53:06.476793 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 28 00:53:06.494814 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 28 00:53:06.510640 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 28 00:53:06.533705 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 28 00:53:06.566011 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 28 00:53:06.663799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:53:06.695016 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 28 00:53:06.845670 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Apr 28 00:53:06.845705 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Apr 28 00:53:06.971075 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:53:07.052213 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 28 00:53:07.754165 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 28 00:53:07.831380 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 00:53:08.279047 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Apr 28 00:53:08.279110 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. 
Apr 28 00:53:08.296113 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:53:13.317325 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 28 00:53:13.399614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:53:14.732236 systemd-udevd[1241]: Using default interface naming scheme 'v255'. Apr 28 00:53:15.825682 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:53:15.848938 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 00:53:16.001896 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 28 00:53:16.610890 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 28 00:53:16.615550 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 28 00:53:16.996132 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1260) Apr 28 00:53:17.231007 systemd-networkd[1250]: lo: Link UP Apr 28 00:53:17.231017 systemd-networkd[1250]: lo: Gained carrier Apr 28 00:53:17.233364 systemd-networkd[1250]: Enumeration completed Apr 28 00:53:17.233645 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 00:53:17.234488 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:53:17.234491 systemd-networkd[1250]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:53:17.241345 systemd-networkd[1250]: eth0: Link UP Apr 28 00:53:17.241682 systemd-networkd[1250]: eth0: Gained carrier Apr 28 00:53:17.244793 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:53:17.247992 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 28 00:53:17.268031 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:53:17.289239 systemd-networkd[1250]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 28 00:53:17.336308 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 28 00:53:17.349611 kernel: ACPI: button: Power Button [PWRF] Apr 28 00:53:17.346197 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 28 00:53:17.535311 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 28 00:53:17.552700 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 28 00:53:17.552961 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 28 00:53:17.575051 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 28 00:53:18.038497 kernel: mousedev: PS/2 mouse device common for all mice Apr 28 00:53:18.063654 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:53:18.301225 systemd-networkd[1250]: eth0: Gained IPv6LL Apr 28 00:53:18.386332 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 28 00:53:19.727483 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Apr 28 00:53:19.760773 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 28 00:53:20.229140 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 00:53:20.369501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:53:20.523163 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 28 00:53:20.531057 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:53:20.597722 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 28 00:53:21.030050 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 00:53:21.400820 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 28 00:53:21.491060 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 28 00:53:21.523279 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 28 00:53:21.525630 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 00:53:21.536773 systemd[1]: Reached target machines.target - Containers. Apr 28 00:53:21.668043 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 28 00:53:21.731376 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 28 00:53:21.852317 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 28 00:53:21.879097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:53:21.989207 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 28 00:53:22.012382 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 28 00:53:22.043580 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 28 00:53:22.048029 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 28 00:53:22.147846 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 28 00:53:22.169860 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 28 00:53:22.175122 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 28 00:53:22.196561 kernel: loop0: detected capacity change from 0 to 228704 Apr 28 00:53:22.347352 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 28 00:53:22.542347 kernel: loop1: detected capacity change from 0 to 142488 Apr 28 00:53:22.975662 kernel: loop2: detected capacity change from 0 to 140768 Apr 28 00:53:23.895602 kernel: loop3: detected capacity change from 0 to 228704 Apr 28 00:53:24.204769 kernel: loop4: detected capacity change from 0 to 142488 Apr 28 00:53:24.449680 kernel: loop5: detected capacity change from 0 to 140768 Apr 28 00:53:24.613146 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 28 00:53:24.614284 (sd-merge)[1311]: Merged extensions into '/usr'. Apr 28 00:53:25.055634 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... Apr 28 00:53:25.055680 systemd[1]: Reloading... 
Apr 28 00:53:26.473325 zram_generator::config[1335]: No configuration found. Apr 28 00:53:27.253359 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 28 00:53:29.198130 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:53:31.143576 systemd[1]: Reloading finished in 6087 ms. Apr 28 00:53:31.321311 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 28 00:53:31.332790 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 28 00:53:31.469801 systemd[1]: Starting ensure-sysext.service... Apr 28 00:53:31.537377 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 00:53:31.575836 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Apr 28 00:53:31.575948 systemd[1]: Reloading... Apr 28 00:53:31.737566 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 28 00:53:31.746649 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 28 00:53:31.747970 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 28 00:53:31.748283 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Apr 28 00:53:31.748344 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Apr 28 00:53:31.798142 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:53:31.853485 systemd-tmpfiles[1383]: Skipping /boot Apr 28 00:53:31.941736 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:53:31.941914 systemd-tmpfiles[1383]: Skipping /boot Apr 28 00:53:31.946949 zram_generator::config[1410]: No configuration found. Apr 28 00:53:35.137200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:53:37.239235 systemd[1]: Reloading finished in 5661 ms. Apr 28 00:53:37.372541 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:53:37.582132 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 28 00:53:37.736124 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 28 00:53:37.746816 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 28 00:53:37.789380 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 00:53:37.867482 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 28 00:53:37.912883 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:53:37.913120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:53:37.939144 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:53:37.974820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 28 00:53:38.043556 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:53:38.054658 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:53:38.056969 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:53:38.076396 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 28 00:53:38.090975 augenrules[1482]: No rules Apr 28 00:53:38.092261 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 28 00:53:38.102210 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:53:38.102778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:53:38.124103 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 28 00:53:38.135292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:53:38.135837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:53:38.149386 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:53:38.149820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:53:38.586364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:53:38.586736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:53:38.671361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:53:38.688240 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:53:38.731254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:53:38.737958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:53:38.796928 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 28 00:53:38.801333 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 28 00:53:38.840816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:53:38.857763 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 28 00:53:38.897225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:53:38.897587 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:53:38.929023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:53:38.929252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:53:38.934003 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:53:38.934323 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:53:38.978146 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 28 00:53:39.156203 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 28 00:53:39.169336 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:53:39.215879 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:53:39.250294 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 28 00:53:39.298988 systemd-resolved[1463]: Positive Trust Anchors: Apr 28 00:53:39.299004 systemd-resolved[1463]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 00:53:39.299042 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 00:53:39.331240 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:53:39.344704 systemd-resolved[1463]: Defaulting to hostname 'linux'. Apr 28 00:53:39.349601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:53:39.354628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:53:39.354813 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 28 00:53:39.354860 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 28 00:53:39.370869 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 00:53:39.396315 systemd[1]: Finished ensure-sysext.service. Apr 28 00:53:39.446178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:53:39.452167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:53:39.515266 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 00:53:39.522002 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 00:53:39.598737 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:53:39.599170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:53:39.782759 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:53:39.798361 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:53:40.305599 systemd[1]: Reached target network.target - Network. Apr 28 00:53:40.320296 systemd[1]: Reached target network-online.target - Network is Online. Apr 28 00:53:40.326609 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:53:40.341050 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 00:53:40.343948 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 28 00:53:40.649058 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 28 00:53:42.415945 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 28 00:53:43.541702 systemd-timesyncd[1527]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 28 00:53:43.541764 systemd-resolved[1463]: Clock change detected. Flushing caches. Apr 28 00:53:43.542016 systemd-timesyncd[1527]: Initial clock synchronization to Tue 2026-04-28 00:53:43.538344 UTC. Apr 28 00:53:43.581116 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 00:53:43.589989 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 28 00:53:43.608407 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 28 00:53:43.614929 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 28 00:53:43.694342 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 28 00:53:43.703544 systemd[1]: Reached target paths.target - Path Units. Apr 28 00:53:43.787619 systemd[1]: Reached target time-set.target - System Time Set. Apr 28 00:53:43.813019 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 28 00:53:43.832684 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 28 00:53:43.846004 systemd[1]: Reached target timers.target - Timer Units. Apr 28 00:53:44.014109 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 28 00:53:44.087209 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 28 00:53:44.272065 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 28 00:53:44.311789 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 28 00:53:44.362573 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 00:53:44.370278 systemd[1]: Reached target basic.target - Basic System. Apr 28 00:53:44.397355 systemd[1]: System is tainted: cgroupsv1 Apr 28 00:53:44.438147 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 28 00:53:44.440478 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 28 00:53:44.502175 systemd[1]: Starting containerd.service - containerd container runtime... Apr 28 00:53:44.546936 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 28 00:53:44.581644 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 28 00:53:44.594234 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 28 00:53:44.667153 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 28 00:53:44.676762 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 28 00:53:44.700137 jq[1535]: false Apr 28 00:53:44.704756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:53:44.744566 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 28 00:53:44.780884 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 28 00:53:44.838612 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 28 00:53:44.851600 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 28 00:53:44.853284 extend-filesystems[1537]: Found loop3 Apr 28 00:53:44.853284 extend-filesystems[1537]: Found loop4 Apr 28 00:53:44.853284 extend-filesystems[1537]: Found loop5 Apr 28 00:53:44.853284 extend-filesystems[1537]: Found sr0 Apr 28 00:53:44.853284 extend-filesystems[1537]: Found vda Apr 28 00:53:44.853284 extend-filesystems[1537]: Found vda1 Apr 28 00:53:44.853284 extend-filesystems[1537]: Found vda2 Apr 28 00:53:44.853284 extend-filesystems[1537]: Found vda3 Apr 28 00:53:44.853284 extend-filesystems[1537]: Found usr Apr 28 00:53:44.853284 extend-filesystems[1537]: Found vda4 Apr 28 00:53:44.853284 extend-filesystems[1537]: Found vda6 Apr 28 00:53:44.853284 extend-filesystems[1537]: Found vda7 Apr 28 00:53:44.853284 extend-filesystems[1537]: Found vda9 Apr 28 00:53:44.853284 extend-filesystems[1537]: Checking size of /dev/vda9 Apr 28 00:53:44.965539 dbus-daemon[1534]: [system] SELinux support is enabled Apr 28 00:53:44.981140 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 28 00:53:45.014312 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 28 00:53:45.027756 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 28 00:53:45.028911 extend-filesystems[1537]: Resized partition /dev/vda9 Apr 28 00:53:45.029235 systemd[1]: Starting update-engine.service - Update Engine... Apr 28 00:53:45.049267 extend-filesystems[1565]: resize2fs 1.47.1 (20-May-2024) Apr 28 00:53:45.058089 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 28 00:53:45.111709 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 28 00:53:45.159786 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 28 00:53:45.252486 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 28 00:53:45.254803 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 28 00:53:45.356272 jq[1568]: true Apr 28 00:53:45.255002 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 28 00:53:45.387864 extend-filesystems[1565]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 28 00:53:45.387864 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 28 00:53:45.387864 extend-filesystems[1565]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 28 00:53:45.283370 systemd[1]: motdgen.service: Deactivated successfully. Apr 28 00:53:45.478033 extend-filesystems[1537]: Resized filesystem in /dev/vda9 Apr 28 00:53:45.309946 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 28 00:53:45.382294 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 28 00:53:45.432959 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 28 00:53:45.433341 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 28 00:53:45.459976 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 28 00:53:45.460254 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 28 00:53:45.540613 jq[1584]: true Apr 28 00:53:45.627820 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1583) Apr 28 00:53:45.595127 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 28 00:53:45.754615 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 28 00:53:45.754946 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 28 00:53:45.845394 systemd-logind[1560]: Watching system buttons on /dev/input/event1 (Power Button) Apr 28 00:53:45.845505 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 28 00:53:45.848609 systemd-logind[1560]: New seat seat0. Apr 28 00:53:45.849527 systemd[1]: Started systemd-logind.service - User Login Management. Apr 28 00:53:45.856146 tar[1580]: linux-amd64/LICENSE Apr 28 00:53:45.875590 tar[1580]: linux-amd64/helm Apr 28 00:53:45.985250 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 28 00:53:46.044260 update_engine[1562]: I20260428 00:53:45.891798 1562 main.cc:92] Flatcar Update Engine starting Apr 28 00:53:45.985745 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 28 00:53:45.985886 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 28 00:53:45.990812 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 28 00:53:45.990956 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 28 00:53:46.106322 sshd_keygen[1569]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 28 00:53:46.124911 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Apr 28 00:53:46.184476 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 28 00:53:46.204607 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 28 00:53:46.263738 systemd[1]: Started update-engine.service - Update Engine. Apr 28 00:53:46.561416 update_engine[1562]: I20260428 00:53:46.361905 1562 update_check_scheduler.cc:74] Next update check in 9m53s Apr 28 00:53:46.562508 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 28 00:53:46.614548 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 28 00:53:46.762774 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 28 00:53:46.943300 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 28 00:53:47.197530 systemd[1]: issuegen.service: Deactivated successfully. Apr 28 00:53:47.197826 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 28 00:53:47.614283 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 28 00:53:48.192302 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Apr 28 00:53:48.313781 locksmithd[1636]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 28 00:53:48.345060 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 28 00:53:48.527540 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 28 00:53:48.561144 systemd[1]: Reached target getty.target - Login Prompts. Apr 28 00:53:49.525839 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 28 00:53:49.560614 containerd[1586]: time="2026-04-28T00:53:49.556094971Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 28 00:53:49.582210 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:36346.service - OpenSSH per-connection server daemon (10.0.0.1:36346). Apr 28 00:53:50.161742 containerd[1586]: time="2026-04-28T00:53:50.161476731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 28 00:53:50.208791 containerd[1586]: time="2026-04-28T00:53:50.202679634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:53:50.208791 containerd[1586]: time="2026-04-28T00:53:50.202855557Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 28 00:53:50.208791 containerd[1586]: time="2026-04-28T00:53:50.202986218Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 28 00:53:50.208791 containerd[1586]: time="2026-04-28T00:53:50.203765945Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 28 00:53:50.208791 containerd[1586]: time="2026-04-28T00:53:50.203822140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 28 00:53:50.208791 containerd[1586]: time="2026-04-28T00:53:50.204006720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:53:50.208791 containerd[1586]: time="2026-04-28T00:53:50.204025095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 28 00:53:50.366407 containerd[1586]: time="2026-04-28T00:53:50.312132878Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:53:50.366407 containerd[1586]: time="2026-04-28T00:53:50.312731357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 28 00:53:50.366407 containerd[1586]: time="2026-04-28T00:53:50.312786262Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:53:50.366407 containerd[1586]: time="2026-04-28T00:53:50.312849413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 28 00:53:50.372845 containerd[1586]: time="2026-04-28T00:53:50.371322015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 28 00:53:50.429055 containerd[1586]: time="2026-04-28T00:53:50.425745674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 28 00:53:50.440778 containerd[1586]: time="2026-04-28T00:53:50.440362638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:53:50.442515 containerd[1586]: time="2026-04-28T00:53:50.441498942Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 28 00:53:50.442515 containerd[1586]: time="2026-04-28T00:53:50.441827929Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 28 00:53:50.442515 containerd[1586]: time="2026-04-28T00:53:50.441964122Z" level=info msg="metadata content store policy set" policy=shared Apr 28 00:53:50.661665 containerd[1586]: time="2026-04-28T00:53:50.661126909Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 28 00:53:51.201217 containerd[1586]: time="2026-04-28T00:53:50.665772615Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 28 00:53:51.201217 containerd[1586]: time="2026-04-28T00:53:50.665992712Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 28 00:53:51.201217 containerd[1586]: time="2026-04-28T00:53:50.666074373Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 28 00:53:51.201217 containerd[1586]: time="2026-04-28T00:53:50.666168866Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 28 00:53:51.216578 containerd[1586]: time="2026-04-28T00:53:51.199551989Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.249985055Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.250654862Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.250736719Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.250774103Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.250810528Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.250970080Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.251049629Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.251125169Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.251179262Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.251252201Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.251380508Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.251398276Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.251608236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.260135 containerd[1586]: time="2026-04-28T00:53:51.251661745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.251697599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.251736874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.260332304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.272290598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.277357175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.277564936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.277696242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.282398484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.282563979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.282636709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.282725192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.282805626Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.284693350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.286504237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287526 containerd[1586]: time="2026-04-28T00:53:51.286582873Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 28 00:53:51.287981 containerd[1586]: time="2026-04-28T00:53:51.286970198Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 28 00:53:51.287981 containerd[1586]: time="2026-04-28T00:53:51.287118438Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 28 00:53:51.287981 containerd[1586]: time="2026-04-28T00:53:51.287147359Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 28 00:53:51.287981 containerd[1586]: time="2026-04-28T00:53:51.287194858Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 28 00:53:51.287981 containerd[1586]: time="2026-04-28T00:53:51.287208357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 28 00:53:51.287981 containerd[1586]: time="2026-04-28T00:53:51.287329044Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 28 00:53:51.287981 containerd[1586]: time="2026-04-28T00:53:51.287470511Z" level=info msg="NRI interface is disabled by configuration." Apr 28 00:53:51.287981 containerd[1586]: time="2026-04-28T00:53:51.287488326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 28 00:53:51.293540 containerd[1586]: time="2026-04-28T00:53:51.290649249Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 28 00:53:51.376966 containerd[1586]: time="2026-04-28T00:53:51.297909342Z" level=info msg="Connect containerd service" Apr 28 00:53:51.376966 containerd[1586]: time="2026-04-28T00:53:51.310635558Z" level=info msg="using legacy CRI server" Apr 28 00:53:51.376966 containerd[1586]: time="2026-04-28T00:53:51.316900897Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 28 00:53:51.394130 containerd[1586]: time="2026-04-28T00:53:51.393784522Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 28 00:53:51.419323 containerd[1586]: time="2026-04-28T00:53:51.416927625Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 28 
00:53:51.437169 containerd[1586]: time="2026-04-28T00:53:51.433950356Z" level=info msg="Start subscribing containerd event" Apr 28 00:53:51.437169 containerd[1586]: time="2026-04-28T00:53:51.434629710Z" level=info msg="Start recovering state" Apr 28 00:53:51.437169 containerd[1586]: time="2026-04-28T00:53:51.435117184Z" level=info msg="Start event monitor" Apr 28 00:53:51.437169 containerd[1586]: time="2026-04-28T00:53:51.435214235Z" level=info msg="Start snapshots syncer" Apr 28 00:53:51.437169 containerd[1586]: time="2026-04-28T00:53:51.435240456Z" level=info msg="Start cni network conf syncer for default" Apr 28 00:53:51.437169 containerd[1586]: time="2026-04-28T00:53:51.435261591Z" level=info msg="Start streaming server" Apr 28 00:53:51.437169 containerd[1586]: time="2026-04-28T00:53:51.436812279Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 28 00:53:51.437169 containerd[1586]: time="2026-04-28T00:53:51.436887833Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 28 00:53:51.437169 containerd[1586]: time="2026-04-28T00:53:51.437088476Z" level=info msg="containerd successfully booted in 1.944032s" Apr 28 00:53:51.443057 systemd[1]: Started containerd.service - containerd container runtime. Apr 28 00:53:51.446885 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 36346 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:53:51.513349 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:51.840208 systemd-logind[1560]: New session 1 of user core. Apr 28 00:53:51.840608 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 28 00:53:51.941126 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 28 00:53:52.466806 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 28 00:53:53.281857 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1058902793 wd_nsec: 1058902377 Apr 28 00:53:53.329616 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 28 00:53:53.512978 (systemd)[1673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 28 00:53:54.793883 tar[1580]: linux-amd64/README.md Apr 28 00:53:54.856529 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 28 00:53:55.234002 systemd[1673]: Queued start job for default target default.target. Apr 28 00:53:55.237221 systemd[1673]: Created slice app.slice - User Application Slice. Apr 28 00:53:55.237258 systemd[1673]: Reached target paths.target - Paths. Apr 28 00:53:55.237271 systemd[1673]: Reached target timers.target - Timers. Apr 28 00:53:55.267215 systemd[1673]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 28 00:53:55.607343 systemd[1673]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 28 00:53:55.607589 systemd[1673]: Reached target sockets.target - Sockets. Apr 28 00:53:55.607646 systemd[1673]: Reached target basic.target - Basic System. Apr 28 00:53:55.607705 systemd[1673]: Reached target default.target - Main User Target. Apr 28 00:53:55.607770 systemd[1673]: Startup finished in 1.934s. Apr 28 00:53:55.646150 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 28 00:53:55.698599 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 28 00:53:56.150801 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:36360.service - OpenSSH per-connection server daemon (10.0.0.1:36360). Apr 28 00:53:56.651379 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 36360 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:53:56.707863 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:56.848933 systemd-logind[1560]: New session 2 of user core. Apr 28 00:53:57.214210 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 28 00:53:57.612144 sshd[1694]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:57.643670 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:36360.service: Deactivated successfully. Apr 28 00:53:57.697233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:53:57.699155 systemd[1]: session-2.scope: Deactivated successfully. Apr 28 00:53:57.707004 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Apr 28 00:53:57.801591 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:53:57.808012 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 28 00:53:57.830884 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:36374.service - OpenSSH per-connection server daemon (10.0.0.1:36374). Apr 28 00:53:57.832690 systemd[1]: Startup finished in 29.499s (kernel) + 1min 883ms (userspace) = 1min 30.383s. Apr 28 00:53:57.833888 systemd-logind[1560]: Removed session 2. Apr 28 00:53:58.081928 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 36374 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:53:58.091193 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:58.343370 systemd-logind[1560]: New session 3 of user core. Apr 28 00:53:58.383520 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 28 00:53:58.948287 sshd[1711]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:58.951735 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:36374.service: Deactivated successfully. Apr 28 00:53:59.013267 systemd[1]: session-3.scope: Deactivated successfully. Apr 28 00:53:59.141322 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Apr 28 00:53:59.995274 systemd-logind[1560]: Removed session 3. Apr 28 00:54:09.296310 kubelet[1707]: E0428 00:54:09.295774 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:54:09.330732 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:38412.service - OpenSSH per-connection server daemon (10.0.0.1:38412). Apr 28 00:54:09.359782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:54:09.361311 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:54:09.918047 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 38412 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:54:09.951003 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:10.517780 systemd-logind[1560]: New session 4 of user core. 
Apr 28 00:54:10.597617 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 28 00:54:11.363350 sshd[1725]: pam_unix(sshd:session): session closed for user core Apr 28 00:54:11.561178 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:45636.service - OpenSSH per-connection server daemon (10.0.0.1:45636). Apr 28 00:54:11.562018 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:38412.service: Deactivated successfully. Apr 28 00:54:11.654470 systemd[1]: session-4.scope: Deactivated successfully. Apr 28 00:54:11.697190 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. Apr 28 00:54:11.966319 systemd-logind[1560]: Removed session 4. Apr 28 00:54:12.176506 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 45636 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:54:12.313919 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:12.652538 systemd-logind[1560]: New session 5 of user core. Apr 28 00:54:12.691149 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 28 00:54:13.361977 sshd[1732]: pam_unix(sshd:session): session closed for user core Apr 28 00:54:13.597613 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:45652.service - OpenSSH per-connection server daemon (10.0.0.1:45652). Apr 28 00:54:13.606309 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:45636.service: Deactivated successfully. Apr 28 00:54:13.797321 systemd[1]: session-5.scope: Deactivated successfully. Apr 28 00:54:13.961134 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. Apr 28 00:54:14.261089 systemd-logind[1560]: Removed session 5. Apr 28 00:54:15.459227 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 45652 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:54:15.465634 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:17.131902 systemd-logind[1560]: New session 6 of user core. Apr 28 00:54:17.158650 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 28 00:54:18.395705 sshd[1740]: pam_unix(sshd:session): session closed for user core Apr 28 00:54:18.461722 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:45664.service - OpenSSH per-connection server daemon (10.0.0.1:45664). Apr 28 00:54:18.558581 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:45652.service: Deactivated successfully. Apr 28 00:54:18.662861 systemd[1]: session-6.scope: Deactivated successfully. Apr 28 00:54:18.712688 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Apr 28 00:54:18.848131 systemd-logind[1560]: Removed session 6. Apr 28 00:54:19.109096 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 45664 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 00:54:19.322401 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:19.479374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 28 00:54:19.803723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:54:19.841648 systemd-logind[1560]: New session 7 of user core. Apr 28 00:54:19.851779 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 28 00:54:20.613941 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 28 00:54:20.614515 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 00:54:28.361071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:54:28.447596 (kubelet)[1785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:54:28.806690 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 28 00:54:28.863301 (dockerd)[1792]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 28 00:54:31.489702 update_engine[1562]: I20260428 00:54:31.480255 1562 update_attempter.cc:509] Updating boot flags... Apr 28 00:54:33.268038 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1808) Apr 28 00:54:33.872709 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1807) Apr 28 00:54:34.175271 kubelet[1785]: E0428 00:54:34.118342 1785 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:54:34.191343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:54:34.192144 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:54:36.818615 dockerd[1792]: time="2026-04-28T00:54:36.808304507Z" level=info msg="Starting up" Apr 28 00:54:39.950242 dockerd[1792]: time="2026-04-28T00:54:39.949336825Z" level=info msg="Loading containers: start." Apr 28 00:54:44.006791 kernel: Initializing XFRM netlink socket Apr 28 00:54:44.254642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 28 00:54:44.289823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:54:45.972510 systemd-networkd[1250]: docker0: Link UP Apr 28 00:54:46.468670 dockerd[1792]: time="2026-04-28T00:54:46.466120030Z" level=info msg="Loading containers: done." Apr 28 00:54:46.969356 dockerd[1792]: time="2026-04-28T00:54:46.968858448Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 28 00:54:46.980164 dockerd[1792]: time="2026-04-28T00:54:46.978416934Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 28 00:54:46.981325 dockerd[1792]: time="2026-04-28T00:54:46.981265588Z" level=info msg="Daemon has completed initialization" Apr 28 00:54:46.984410 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3468354090-merged.mount: Deactivated successfully. Apr 28 00:54:47.801994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:54:47.881081 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:54:49.466354 dockerd[1792]: time="2026-04-28T00:54:49.459640409Z" level=info msg="API listen on /run/docker.sock" Apr 28 00:54:49.486526 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 28 00:54:52.762290 kubelet[1943]: E0428 00:54:52.761866 1943 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:54:52.786010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:54:52.818342 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:55:03.132415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 28 00:55:03.177660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:55:03.797950 containerd[1586]: time="2026-04-28T00:55:03.797083776Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 28 00:55:07.978074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:55:08.085960 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:55:09.015574 kubelet[1995]: E0428 00:55:09.015121 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:55:09.080541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:55:09.081268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:55:09.458534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount583112136.mount: Deactivated successfully. Apr 28 00:55:19.399605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 28 00:55:19.509566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:55:22.252240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:55:22.290931 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:55:23.188083 kubelet[2063]: E0428 00:55:23.186207 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:55:23.215168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:55:23.220933 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 00:55:28.465715 containerd[1586]: time="2026-04-28T00:55:28.465155048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:55:28.482227 containerd[1586]: time="2026-04-28T00:55:28.475198580Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 28 00:55:28.485477 containerd[1586]: time="2026-04-28T00:55:28.485369707Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:55:28.664970 containerd[1586]: time="2026-04-28T00:55:28.664227617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:55:28.754196 containerd[1586]: time="2026-04-28T00:55:28.743359793Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 24.945625549s" Apr 28 00:55:28.754196 containerd[1586]: time="2026-04-28T00:55:28.744799366Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 28 00:55:28.822211 containerd[1586]: time="2026-04-28T00:55:28.821625878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 28 00:55:33.467592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 28 00:55:33.660361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:55:35.829745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:55:35.853488 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:55:36.720365 kubelet[2100]: E0428 00:55:36.720016 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:55:36.730897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:55:36.733944 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 00:55:40.090461 containerd[1586]: time="2026-04-28T00:55:40.086085762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:55:40.101634 containerd[1586]: time="2026-04-28T00:55:40.096056609Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 28 00:55:40.264742 containerd[1586]: time="2026-04-28T00:55:40.264141657Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:55:40.768383 containerd[1586]: time="2026-04-28T00:55:40.767759303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:55:40.804105 containerd[1586]: time="2026-04-28T00:55:40.801602163Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 11.979654019s" Apr 28 00:55:40.804105 containerd[1586]: time="2026-04-28T00:55:40.802763581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 28 00:55:40.806802 containerd[1586]: time="2026-04-28T00:55:40.806748975Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 28 00:55:46.964311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 28 00:55:47.084109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 00:55:47.462470 containerd[1586]: time="2026-04-28T00:55:47.461397095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:55:47.474592 containerd[1586]: time="2026-04-28T00:55:47.465328836Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 28 00:55:47.490216 containerd[1586]: time="2026-04-28T00:55:47.490058171Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:55:47.580757 containerd[1586]: time="2026-04-28T00:55:47.579944897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:55:47.657357 containerd[1586]: time="2026-04-28T00:55:47.654657190Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 6.846526689s" Apr 28 00:55:47.661281 containerd[1586]: time="2026-04-28T00:55:47.658109754Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 28 00:55:48.096576 containerd[1586]: time="2026-04-28T00:55:48.096337425Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 28 00:55:48.896399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:55:48.952951 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:55:49.695263 kubelet[2124]: E0428 00:55:49.694671 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:55:49.760025 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:55:49.761833 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:56:00.189086 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 28 00:56:00.410450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:56:03.280665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:56:03.303395 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:56:03.352229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1261436822.mount: Deactivated successfully. 
Apr 28 00:56:04.458342 kubelet[2149]: E0428 00:56:04.457779 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:56:04.475187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:56:04.516876 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:56:09.260121 containerd[1586]: time="2026-04-28T00:56:09.257459094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:56:09.281249 containerd[1586]: time="2026-04-28T00:56:09.277187338Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 28 00:56:09.333987 containerd[1586]: time="2026-04-28T00:56:09.332926872Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:56:09.388442 containerd[1586]: time="2026-04-28T00:56:09.386713607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:56:09.397014 containerd[1586]: time="2026-04-28T00:56:09.396613787Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 21.300202314s" Apr 28 00:56:09.397014 containerd[1586]: time="2026-04-28T00:56:09.396670101Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 28 00:56:09.809465 containerd[1586]: time="2026-04-28T00:56:09.713881334Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 28 00:56:14.707727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 28 00:56:14.729718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:56:14.734034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256173564.mount: Deactivated successfully. Apr 28 00:56:16.787625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:56:16.830171 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:56:18.260083 kubelet[2189]: E0428 00:56:18.259186 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:56:18.349355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:56:18.354956 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 00:56:28.465056 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 28 00:56:28.623756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:56:34.360639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:56:34.445180 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:56:38.835488 kubelet[2251]: E0428 00:56:38.831781 2251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:56:38.865001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:56:38.964806 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:56:44.902165 containerd[1586]: time="2026-04-28T00:56:44.892895404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:56:44.971208 containerd[1586]: time="2026-04-28T00:56:44.951024488Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 28 00:56:45.203111 containerd[1586]: time="2026-04-28T00:56:45.198297511Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:56:46.499110 containerd[1586]: time="2026-04-28T00:56:46.498332934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:56:47.462095 containerd[1586]: time="2026-04-28T00:56:47.461538180Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 37.652130098s" Apr 28 00:56:47.462095 containerd[1586]: time="2026-04-28T00:56:47.461800281Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 28 00:56:47.696992 containerd[1586]: time="2026-04-28T00:56:47.571214686Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 28 00:56:49.710742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 28 00:56:50.106287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:56:55.416606 containerd[1586]: time="2026-04-28T00:56:55.413394920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:56:55.414209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099580753.mount: Deactivated successfully. 
Apr 28 00:56:55.476394 containerd[1586]: time="2026-04-28T00:56:55.471470413Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 28 00:56:55.650242 containerd[1586]: time="2026-04-28T00:56:55.649878425Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:56:56.100364 containerd[1586]: time="2026-04-28T00:56:56.092375958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:56:56.431830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:56:56.495592 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:56:56.547338 containerd[1586]: time="2026-04-28T00:56:56.496367551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 8.904098384s" Apr 28 00:56:56.547338 containerd[1586]: time="2026-04-28T00:56:56.496512607Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 28 00:56:56.592933 containerd[1586]: time="2026-04-28T00:56:56.592667521Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 28 00:56:59.103282 kubelet[2275]: E0428 00:56:59.102496 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:56:59.156918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:56:59.193057 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:57:06.746704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715731863.mount: Deactivated successfully. Apr 28 00:57:09.363067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 28 00:57:09.819124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:57:17.266363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:57:17.537923 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:57:26.242165 kubelet[2309]: E0428 00:57:26.241719 2309 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:57:26.353211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:57:26.411275 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 00:57:36.666872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 28 00:57:36.797189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:57:43.389567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:57:43.473164 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:57:45.816535 kubelet[2331]: E0428 00:57:45.815988 2331 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:57:45.886637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:57:45.887090 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:57:56.041349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 28 00:57:56.278823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:58:00.120257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:58:00.154775 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:58:01.357743 kubelet[2387]: E0428 00:58:01.324314 2387 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:58:01.369388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:58:01.369874 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 28 00:58:08.975391 containerd[1586]: time="2026-04-28T00:58:08.970080661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:58:09.111094 containerd[1586]: time="2026-04-28T00:58:08.996306797Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 28 00:58:09.111094 containerd[1586]: time="2026-04-28T00:58:08.999294798Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:58:09.443748 containerd[1586]: time="2026-04-28T00:58:09.439891087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:58:09.549219 containerd[1586]: time="2026-04-28T00:58:09.548098039Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1m12.955096309s" Apr 28 00:58:09.549219 containerd[1586]: time="2026-04-28T00:58:09.548272463Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 28 00:58:11.618275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Apr 28 00:58:11.968543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:58:15.059779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:58:15.162301 (kubelet)[2446]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:58:17.202962 kubelet[2446]: E0428 00:58:17.195001 2446 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:58:17.229150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:58:17.237641 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:58:27.715224 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Apr 28 00:58:28.316571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:58:32.840802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:58:33.008841 (kubelet)[2475]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:58:34.851159 kubelet[2475]: E0428 00:58:34.849710 2475 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:58:34.932748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:58:34.960766 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:58:45.021290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Apr 28 00:58:45.358037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:58:50.667484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:58:50.808027 (kubelet)[2497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:58:53.737285 kubelet[2497]: E0428 00:58:53.734778 2497 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:58:53.766988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:58:53.817217 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:58:57.017404 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:58:58.699790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:59:00.790753 systemd[1]: Reloading requested from client PID 2514 ('systemctl') (unit session-7.scope)... Apr 28 00:59:00.790858 systemd[1]: Reloading... Apr 28 00:59:02.251916 zram_generator::config[2553]: No configuration found. Apr 28 00:59:11.674770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:59:17.352725 systemd[1]: Reloading finished in 16556 ms. Apr 28 00:59:19.785180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:59:19.963086 (kubelet)[2601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:59:19.964043 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:59:19.964415 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 00:59:19.964787 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:59:20.217373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:59:29.355414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:59:29.485048 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:59:31.835409 kubelet[2617]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 00:59:31.835409 kubelet[2617]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 00:59:31.835409 kubelet[2617]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 00:59:31.939814 kubelet[2617]: I0428 00:59:31.912983 2617 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 00:59:43.047817 kubelet[2617]: I0428 00:59:43.047385 2617 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 28 00:59:43.047817 kubelet[2617]: I0428 00:59:43.047614 2617 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 00:59:43.074613 kubelet[2617]: I0428 00:59:43.053152 2617 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 00:59:43.719055 kubelet[2617]: E0428 00:59:43.717772 2617 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:59:43.726706 kubelet[2617]: I0428 00:59:43.726517 2617 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 00:59:43.923811 kubelet[2617]: E0428 00:59:43.923413 2617 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 00:59:43.923811 kubelet[2617]: I0428 00:59:43.923671 2617 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 28 00:59:44.246819 kubelet[2617]: I0428 00:59:44.245257 2617 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 28 00:59:44.260850 kubelet[2617]: I0428 00:59:44.260576 2617 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 00:59:44.261769 kubelet[2617]: I0428 00:59:44.260875 2617 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 28 00:59:44.262086 kubelet[2617]: I0428 00:59:44.261841 2617 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 00:59:44.262086 kubelet[2617]: I0428 00:59:44.261860 2617 container_manager_linux.go:303] "Creating device plugin manager" Apr 28 00:59:44.271785 kubelet[2617]: I0428 00:59:44.271517 2617 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:59:44.310220 kubelet[2617]: I0428 00:59:44.309275 2617 kubelet.go:480] "Attempting to sync node with API server" Apr 28 00:59:44.310220 kubelet[2617]: I0428 00:59:44.309871 2617 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 00:59:44.313517 kubelet[2617]: I0428 00:59:44.311838 2617 kubelet.go:386] "Adding apiserver pod source" Apr 28 00:59:44.313517 kubelet[2617]: I0428 00:59:44.312180 2617 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 00:59:44.378557 kubelet[2617]: E0428 00:59:44.374631 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:59:44.394510 kubelet[2617]: E0428 00:59:44.382476 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:59:44.402075 
kubelet[2617]: I0428 00:59:44.402000 2617 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 00:59:44.410138 kubelet[2617]: I0428 00:59:44.410019 2617 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 00:59:44.410833 kubelet[2617]: W0428 00:59:44.410778 2617 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 28 00:59:44.505699 kubelet[2617]: I0428 00:59:44.502923 2617 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 28 00:59:44.506554 kubelet[2617]: I0428 00:59:44.506004 2617 server.go:1289] "Started kubelet" Apr 28 00:59:44.508068 kubelet[2617]: I0428 00:59:44.506765 2617 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 00:59:44.521175 kubelet[2617]: I0428 00:59:44.521022 2617 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 00:59:44.522049 kubelet[2617]: I0428 00:59:44.520122 2617 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 00:59:44.522586 kubelet[2617]: I0428 00:59:44.522527 2617 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 00:59:44.594610 kubelet[2617]: I0428 00:59:44.592748 2617 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 00:59:44.597067 kubelet[2617]: E0428 00:59:44.594781 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:59:44.597067 kubelet[2617]: I0428 00:59:44.594843 2617 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 28 00:59:44.597067 kubelet[2617]: E0428 00:59:44.593571 2617 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5f7aec3fb9d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:59:44.505330128 +0000 UTC m=+14.871377823,LastTimestamp:2026-04-28 00:59:44.505330128 +0000 UTC m=+14.871377823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:59:44.597067 kubelet[2617]: I0428 00:59:44.595048 2617 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 28 00:59:44.600269 kubelet[2617]: I0428 00:59:44.597198 2617 reconciler.go:26] "Reconciler: start to sync state" Apr 28 00:59:44.600269 kubelet[2617]: I0428 00:59:44.598167 2617 server.go:317] "Adding debug handlers to kubelet server" Apr 28 00:59:44.600269 kubelet[2617]: E0428 00:59:44.598189 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms" Apr 28 00:59:44.611793 kubelet[2617]: E0428 
00:59:44.611580 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:59:44.615979 kubelet[2617]: I0428 00:59:44.615721 2617 factory.go:223] Registration of the systemd container factory successfully Apr 28 00:59:44.615979 kubelet[2617]: I0428 00:59:44.615863 2617 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 00:59:44.628109 kubelet[2617]: E0428 00:59:44.627981 2617 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 00:59:44.639599 kubelet[2617]: I0428 00:59:44.639479 2617 factory.go:223] Registration of the containerd container factory successfully Apr 28 00:59:44.709688 kubelet[2617]: E0428 00:59:44.704986 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:59:44.719598 kubelet[2617]: I0428 00:59:44.719372 2617 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 28 00:59:44.792883 kubelet[2617]: I0428 00:59:44.790141 2617 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 28 00:59:44.800286 kubelet[2617]: I0428 00:59:44.795180 2617 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 28 00:59:44.806962 kubelet[2617]: I0428 00:59:44.806923 2617 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 28 00:59:44.807084 kubelet[2617]: I0428 00:59:44.806997 2617 kubelet.go:2436] "Starting kubelet main sync loop" Apr 28 00:59:44.807219 kubelet[2617]: E0428 00:59:44.807089 2617 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:59:44.808482 kubelet[2617]: E0428 00:59:44.808409 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms" Apr 28 00:59:44.822083 kubelet[2617]: E0428 00:59:44.821589 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:59:44.829055 kubelet[2617]: E0428 00:59:44.822800 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:59:44.914782 kubelet[2617]: E0428 00:59:44.913042 2617 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:59:44.953931 kubelet[2617]: E0428 00:59:44.953127 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:59:45.101302 kubelet[2617]: E0428 00:59:45.063750 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:59:45.118657 kubelet[2617]: E0428 00:59:45.116874 2617 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:59:45.181398 kubelet[2617]: E0428 00:59:45.180559 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:59:45.264485 kubelet[2617]: E0428 00:59:45.264075 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:59:45.264485 kubelet[2617]: E0428 00:59:45.264229 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Apr 28 00:59:45.289269 kubelet[2617]: E0428 00:59:45.287097 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:59:45.402766 kubelet[2617]: E0428 00:59:45.393968 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:59:45.505925 kubelet[2617]: E0428 00:59:45.500403 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:59:45.576622 kubelet[2617]: E0428 00:59:45.576013 2617 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 00:59:45.588180 
kubelet[2617]: E0428 00:59:45.585860 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:59:45.604367 kubelet[2617]: I0428 00:59:45.603553 2617 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 00:59:45.614716 kubelet[2617]: I0428 00:59:45.606934 2617 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 00:59:45.614716 kubelet[2617]: E0428 00:59:45.607017 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:59:45.614716 kubelet[2617]: I0428 00:59:45.612936 2617 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:59:45.633040 kubelet[2617]: I0428 00:59:45.632550 2617 policy_none.go:49] "None policy: Start" Apr 28 00:59:45.633040 kubelet[2617]: I0428 00:59:45.633029 2617 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 28 00:59:45.633040 kubelet[2617]: I0428 00:59:45.633105 2617 state_mem.go:35] "Initializing new in-memory state store" Apr 28 00:59:45.773545 kubelet[2617]: E0428 00:59:45.728201 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:59:45.789706 kubelet[2617]: E0428 00:59:45.789344 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:59:45.803754 kubelet[2617]: E0428 00:59:45.803615 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:59:45.812473 kubelet[2617]: E0428 00:59:45.812039 2617 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:59:45.841311 kubelet[2617]: I0428 00:59:45.835909 2617 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 00:59:45.841311 kubelet[2617]: I0428 00:59:45.836007 2617 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:59:45.848195 kubelet[2617]: I0428 00:59:45.848151 2617 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 00:59:45.865546 kubelet[2617]: E0428 00:59:45.865075 2617 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 28 00:59:45.865546 kubelet[2617]: E0428 00:59:45.865389 2617 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:59:45.983388 kubelet[2617]: E0428 00:59:45.982581 2617 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:59:46.015665 kubelet[2617]: I0428 00:59:46.015194 2617 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:59:46.061404 kubelet[2617]: E0428 00:59:46.016563 2617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Apr 28 00:59:46.270911 kubelet[2617]: E0428 00:59:46.270314 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="1.6s" Apr 28 00:59:46.298124 kubelet[2617]: I0428 00:59:46.295975 2617 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:59:46.305643 kubelet[2617]: E0428 00:59:46.302064 2617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Apr 28 00:59:46.461358 kubelet[2617]: I0428 00:59:46.459407 2617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04ce0b2223f493e56fe4c887c063836d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"04ce0b2223f493e56fe4c887c063836d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:59:46.461358 kubelet[2617]: I0428 00:59:46.461023 2617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04ce0b2223f493e56fe4c887c063836d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"04ce0b2223f493e56fe4c887c063836d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:59:46.461358 kubelet[2617]: I0428 00:59:46.461086 2617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04ce0b2223f493e56fe4c887c063836d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"04ce0b2223f493e56fe4c887c063836d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:59:46.683319 kubelet[2617]: I0428 00:59:46.680112 2617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:46.747911 kubelet[2617]: I0428 00:59:46.696789 2617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:46.762706 kubelet[2617]: I0428 00:59:46.762290 2617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:46.764201 kubelet[2617]: E0428 00:59:46.764039 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:59:46.764201 kubelet[2617]: I0428 00:59:46.764146 2617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:46.764293 kubelet[2617]: I0428 00:59:46.764191 2617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:59:46.801184 kubelet[2617]: I0428 00:59:46.800727 2617 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:59:46.801184 kubelet[2617]: E0428 00:59:46.801058 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:46.839718 kubelet[2617]: E0428 00:59:46.801904 2617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Apr 28 00:59:46.839718 kubelet[2617]: E0428 00:59:46.837975 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:59:46.885732 kubelet[2617]: I0428 00:59:46.882688 2617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 28 00:59:46.887316 containerd[1586]: time="2026-04-28T00:59:46.887034123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:04ce0b2223f493e56fe4c887c063836d,Namespace:kube-system,Attempt:0,}" Apr 28 00:59:46.973785 kubelet[2617]: E0428 00:59:46.971846 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:59:47.187235 kubelet[2617]: E0428 00:59:47.163920 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:47.209895 containerd[1586]: time="2026-04-28T00:59:47.207825835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 28 00:59:47.419210 kubelet[2617]: E0428 00:59:47.417069 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:47.465128 containerd[1586]: time="2026-04-28T00:59:47.464311891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 28 00:59:47.599735 kubelet[2617]: E0428 00:59:47.596360 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:59:47.797722 kubelet[2617]: I0428 00:59:47.796718 2617 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:59:47.798536 kubelet[2617]: E0428 00:59:47.797891 2617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Apr 28 00:59:47.909846 kubelet[2617]: E0428 00:59:47.909378 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:59:47.909846 kubelet[2617]: E0428 00:59:47.909493 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="3.2s" Apr 28 00:59:48.451662 kubelet[2617]: E0428 00:59:48.451399 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:59:48.830185 kubelet[2617]: E0428 00:59:48.829830 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:59:49.787018 kubelet[2617]: E0428 00:59:49.770789 2617 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5f7aec3fb9d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:59:44.505330128 +0000 UTC m=+14.871377823,LastTimestamp:2026-04-28 00:59:44.505330128 +0000 UTC m=+14.871377823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:59:50.597408 kubelet[2617]: E0428 00:59:50.597255 2617 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:59:50.641544 kubelet[2617]: I0428 00:59:50.641300 2617 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:59:50.649156 kubelet[2617]: E0428 00:59:50.642122 2617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Apr 28 00:59:51.202034 kubelet[2617]: E0428 00:59:51.177517 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="6.4s" Apr 28 00:59:51.368504 kubelet[2617]: E0428 00:59:51.368099 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:59:51.665319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3727879886.mount: Deactivated successfully. 
Apr 28 00:59:51.815480 containerd[1586]: time="2026-04-28T00:59:51.815023337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:59:51.869406 containerd[1586]: time="2026-04-28T00:59:51.868344190Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 28 00:59:51.890365 containerd[1586]: time="2026-04-28T00:59:51.890244993Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:59:51.891756 containerd[1586]: time="2026-04-28T00:59:51.891667031Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:59:51.892303 containerd[1586]: time="2026-04-28T00:59:51.892198601Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 00:59:51.898148 containerd[1586]: time="2026-04-28T00:59:51.897772185Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:59:51.899841 containerd[1586]: time="2026-04-28T00:59:51.898472319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 00:59:51.915570 containerd[1586]: time="2026-04-28T00:59:51.914937290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:59:51.918409 containerd[1586]: time="2026-04-28T00:59:51.918129478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.030788984s" Apr 28 00:59:51.944538 containerd[1586]: time="2026-04-28T00:59:51.941777584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.730752836s" Apr 28 00:59:51.944538 containerd[1586]: time="2026-04-28T00:59:51.944113571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.451387992s" Apr 28 00:59:53.073083 kubelet[2617]: E0428 00:59:53.071618 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:59:53.212162 kubelet[2617]: E0428 00:59:53.211537 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:59:53.244775 containerd[1586]: time="2026-04-28T00:59:53.237320609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:59:53.244775 containerd[1586]: time="2026-04-28T00:59:53.238411290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:59:53.244775 containerd[1586]: time="2026-04-28T00:59:53.242385582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:59:53.288977 containerd[1586]: time="2026-04-28T00:59:53.247033292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:59:53.408218 kubelet[2617]: E0428 00:59:53.403253 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:59:53.428176 containerd[1586]: time="2026-04-28T00:59:53.422957180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:59:53.449313 containerd[1586]: time="2026-04-28T00:59:53.443134244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:59:53.449313 containerd[1586]: time="2026-04-28T00:59:53.443225766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:59:53.449313 containerd[1586]: time="2026-04-28T00:59:53.443947045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:59:53.449313 containerd[1586]: time="2026-04-28T00:59:53.424672675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:59:53.449313 containerd[1586]: time="2026-04-28T00:59:53.447050252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:59:53.449313 containerd[1586]: time="2026-04-28T00:59:53.447184271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:59:53.497546 containerd[1586]: time="2026-04-28T00:59:53.476951609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:59:54.228336 kubelet[2617]: I0428 00:59:54.227639 2617 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 00:59:54.253501 kubelet[2617]: E0428 00:59:54.253003 2617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Apr 28 00:59:54.669578 containerd[1586]: time="2026-04-28T00:59:54.660638254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"f982d6cbbc6dc2f50514182e4e7425d1dedaf958dd13f701ace907043e423a19\"" Apr 28 00:59:54.669578 containerd[1586]: time="2026-04-28T00:59:54.661189895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ac400876d35f772f1fdc2675a5271f5dd8eb62cd0a69b024b6ed75bec901e74\"" Apr 28 00:59:54.686637 containerd[1586]: time="2026-04-28T00:59:54.670326662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:04ce0b2223f493e56fe4c887c063836d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9136ea2a5640920ef6885893b09c98861f1779d8f82af4e3c5078bb6a2d2cecf\"" Apr 28 00:59:54.691350 kubelet[2617]: E0428 00:59:54.691263 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:54.691708 kubelet[2617]: E0428 00:59:54.691463 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:54.691708 kubelet[2617]: E0428 00:59:54.691262 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:59:54.780786 containerd[1586]: time="2026-04-28T00:59:54.779489851Z" level=info msg="CreateContainer within sandbox \"2ac400876d35f772f1fdc2675a5271f5dd8eb62cd0a69b024b6ed75bec901e74\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 28 00:59:54.793872 containerd[1586]: time="2026-04-28T00:59:54.793482170Z" level=info msg="CreateContainer within sandbox \"9136ea2a5640920ef6885893b09c98861f1779d8f82af4e3c5078bb6a2d2cecf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 28 00:59:54.817148 containerd[1586]: time="2026-04-28T00:59:54.816788535Z" level=info msg="CreateContainer within sandbox \"f982d6cbbc6dc2f50514182e4e7425d1dedaf958dd13f701ace907043e423a19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 28 00:59:55.221178 containerd[1586]: time="2026-04-28T00:59:55.216837163Z" level=info msg="CreateContainer within sandbox \"9136ea2a5640920ef6885893b09c98861f1779d8f82af4e3c5078bb6a2d2cecf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e432efc4bff571e2b2da236c6b8a11c7272d76d5529eb1f87ec84497f13d0afb\"" Apr 28 00:59:55.227506 containerd[1586]: time="2026-04-28T00:59:55.227253705Z" level=info msg="CreateContainer within sandbox \"2ac400876d35f772f1fdc2675a5271f5dd8eb62cd0a69b024b6ed75bec901e74\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container 
id \"94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7\"" Apr 28 00:59:55.229609 containerd[1586]: time="2026-04-28T00:59:55.229396201Z" level=info msg="StartContainer for \"94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7\"" Apr 28 00:59:55.229874 containerd[1586]: time="2026-04-28T00:59:55.229845862Z" level=info msg="StartContainer for \"e432efc4bff571e2b2da236c6b8a11c7272d76d5529eb1f87ec84497f13d0afb\"" Apr 28 00:59:55.253494 containerd[1586]: time="2026-04-28T00:59:55.253216950Z" level=info msg="CreateContainer within sandbox \"f982d6cbbc6dc2f50514182e4e7425d1dedaf958dd13f701ace907043e423a19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"68453dd1112c3f293ee3f372c3448266eb9dce79c196792b9a160992611e1fa3\"" Apr 28 00:59:55.254601 containerd[1586]: time="2026-04-28T00:59:55.254201091Z" level=info msg="StartContainer for \"68453dd1112c3f293ee3f372c3448266eb9dce79c196792b9a160992611e1fa3\"" Apr 28 00:59:55.997398 kubelet[2617]: E0428 00:59:55.920717 2617 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:59:56.497925 containerd[1586]: time="2026-04-28T00:59:56.497546093Z" level=info msg="StartContainer for \"e432efc4bff571e2b2da236c6b8a11c7272d76d5529eb1f87ec84497f13d0afb\" returns successfully" Apr 28 00:59:56.663846 containerd[1586]: time="2026-04-28T00:59:56.662061995Z" level=info msg="StartContainer for \"94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7\" returns successfully" Apr 28 00:59:57.593178 containerd[1586]: time="2026-04-28T00:59:57.591514376Z" level=info msg="StartContainer for \"68453dd1112c3f293ee3f372c3448266eb9dce79c196792b9a160992611e1fa3\" returns successfully" Apr 28 01:00:02.154705 kubelet[2617]: I0428 01:00:02.152928 2617 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:00:02.154705 kubelet[2617]: E0428 01:00:02.154131 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:02.154705 kubelet[2617]: E0428 01:00:02.154469 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:03.844931 kubelet[2617]: E0428 01:00:03.842234 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:03.910705 kubelet[2617]: E0428 01:00:03.902857 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:04.221315 kubelet[2617]: E0428 01:00:04.196310 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:04.283319 kubelet[2617]: E0428 01:00:04.257839 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:05.207687 kubelet[2617]: E0428 01:00:05.207291 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:05.273114 kubelet[2617]: E0428 01:00:05.207951 2617 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:05.273114 kubelet[2617]: E0428 01:00:05.207299 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:05.273114 kubelet[2617]: E0428 01:00:05.208249 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:05.273114 kubelet[2617]: E0428 01:00:05.222271 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:05.273114 kubelet[2617]: E0428 01:00:05.251564 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:06.117302 kubelet[2617]: E0428 01:00:06.113851 2617 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:00:06.806982 kubelet[2617]: E0428 01:00:06.800397 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:06.955271 kubelet[2617]: E0428 01:00:06.954094 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:06.993563 kubelet[2617]: E0428 01:00:06.955297 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:07.402189 kubelet[2617]: E0428 01:00:07.401862 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:08.170374 kubelet[2617]: E0428 01:00:07.972244 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 01:00:09.103837 kubelet[2617]: E0428 01:00:09.101296 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:09.114559 kubelet[2617]: E0428 01:00:09.114142 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:10.277227 kubelet[2617]: E0428 01:00:10.275545 2617 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 01:00:10.305887 kubelet[2617]: E0428 01:00:10.275814 2617 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.21:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5f7aec3fb9d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:59:44.505330128 +0000 UTC m=+14.871377823,LastTimestamp:2026-04-28 00:59:44.505330128 +0000 UTC m=+14.871377823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 01:00:10.317688 kubelet[2617]: E0428 01:00:10.317633 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:00:12.195230 kubelet[2617]: E0428 01:00:12.180070 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:12.316125 kubelet[2617]: E0428 01:00:12.287846 2617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 01:00:12.316125 kubelet[2617]: E0428 01:00:12.311603 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:13.365792 kubelet[2617]: E0428 01:00:13.364145 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:13.504173 kubelet[2617]: E0428 01:00:13.367636 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:13.647584 kubelet[2617]: E0428 01:00:13.639492 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:00:13.696083 kubelet[2617]: E0428 01:00:13.695162 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:00:13.961693 kubelet[2617]: E0428 01:00:13.917776 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:00:14.126740 kubelet[2617]: E0428 01:00:14.121021 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" 
Apr 28 01:00:14.279100 kubelet[2617]: E0428 01:00:14.244271 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:16.193090 kubelet[2617]: E0428 01:00:16.192536 2617 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:00:19.716343 kubelet[2617]: I0428 01:00:19.707936 2617 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:00:25.253800 kubelet[2617]: E0428 01:00:25.250996 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 28 01:00:25.749240 kubelet[2617]: E0428 01:00:25.748839 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:00:25.977529 kubelet[2617]: E0428 01:00:25.976859 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:00:26.228655 kubelet[2617]: E0428 01:00:26.226845 2617 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:00:29.800208 kubelet[2617]: E0428 01:00:29.787220 2617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 01:00:30.516206 kubelet[2617]: E0428 01:00:30.513725 2617 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5f7aec3fb9d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:59:44.505330128 +0000 UTC m=+14.871377823,LastTimestamp:2026-04-28 00:59:44.505330128 +0000 UTC m=+14.871377823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 01:00:42.045051 kubelet[2617]: E0428 01:00:41.908485 2617 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:00:42.379606 kubelet[2617]: E0428 01:00:42.309138 2617 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 01:00:42.383351 kubelet[2617]: E0428 01:00:42.379782 2617 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 
01:00:42.424853 kubelet[2617]: E0428 01:00:42.424259 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 28 01:00:42.458307 kubelet[2617]: E0428 01:00:42.425129 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:00:42.762252 kubelet[2617]: I0428 01:00:42.759606 2617 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:00:44.901352 kubelet[2617]: E0428 01:00:44.896268 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:00:51.793091 kubelet[2617]: E0428 01:00:51.769912 2617 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18aa5f7aec3fb9d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:59:44.505330128 +0000 UTC m=+14.871377823,LastTimestamp:2026-04-28 00:59:44.505330128 +0000 UTC m=+14.871377823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 01:00:53.123807 kubelet[2617]: E0428 01:00:53.121567 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:00:53.271088 kubelet[2617]: E0428 01:00:53.261854 2617 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:00:53.368264 kubelet[2617]: E0428 01:00:53.301006 2617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 28 01:00:53.942642 kubelet[2617]: E0428 01:00:53.942275 2617 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:00:58.664656 kubelet[2617]: E0428 01:00:58.664285 2617 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 28 01:01:01.419269 kubelet[2617]: I0428 01:01:01.414386 2617 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 
01:01:01.636665 kubelet[2617]: I0428 01:01:01.636309 2617 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 01:01:01.636665 kubelet[2617]: E0428 01:01:01.636572 2617 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 28 01:01:02.796134 kubelet[2617]: E0428 01:01:02.795795 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:03.015900 kubelet[2617]: E0428 01:01:03.012043 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:03.184756 kubelet[2617]: E0428 01:01:03.174208 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:03.279065 kubelet[2617]: E0428 01:01:03.278257 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:03.301015 kubelet[2617]: E0428 01:01:03.289832 2617 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:01:03.395331 kubelet[2617]: E0428 01:01:03.394379 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:03.506922 kubelet[2617]: E0428 01:01:03.504902 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:03.612239 kubelet[2617]: E0428 01:01:03.612001 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:03.717823 kubelet[2617]: E0428 01:01:03.715160 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:03.887303 kubelet[2617]: E0428 01:01:03.882731 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:03.986772 kubelet[2617]: E0428 01:01:03.985978 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:04.097088 kubelet[2617]: E0428 01:01:04.089269 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:04.206934 kubelet[2617]: E0428 01:01:04.193140 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:04.338268 kubelet[2617]: E0428 01:01:04.297229 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:04.449913 kubelet[2617]: E0428 01:01:04.447669 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:04.568681 kubelet[2617]: E0428 01:01:04.566049 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:04.674219 kubelet[2617]: E0428 01:01:04.670387 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:04.813346 kubelet[2617]: E0428 01:01:04.811277 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:04.927292 kubelet[2617]: E0428 01:01:04.920408 2617 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:05.076389 kubelet[2617]: E0428 01:01:05.076176 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:05.279958 kubelet[2617]: E0428 01:01:05.278769 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:05.593742 kubelet[2617]: E0428 01:01:05.573158 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:05.676087 kubelet[2617]: E0428 01:01:05.674976 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:05.781327 kubelet[2617]: E0428 01:01:05.780406 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:05.897403 kubelet[2617]: E0428 01:01:05.893084 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:05.996997 kubelet[2617]: E0428 01:01:05.996334 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:06.136909 kubelet[2617]: E0428 01:01:06.128055 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:06.255731 kubelet[2617]: E0428 01:01:06.254669 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:06.371227 kubelet[2617]: E0428 01:01:06.368606 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:06.490061 kubelet[2617]: E0428 01:01:06.486742 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:06.705301 kubelet[2617]: E0428 01:01:06.694870 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:06.823046 kubelet[2617]: E0428 01:01:06.817764 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:06.921216 kubelet[2617]: E0428 01:01:06.920271 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:07.079286 kubelet[2617]: E0428 01:01:07.076597 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:07.178312 kubelet[2617]: E0428 01:01:07.178024 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:07.280414 kubelet[2617]: E0428 01:01:07.280115 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:07.394569 kubelet[2617]: E0428 01:01:07.386909 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:07.506846 kubelet[2617]: E0428 01:01:07.501145 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:07.613283 kubelet[2617]: E0428 01:01:07.610397 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Apr 28 01:01:07.720383 kubelet[2617]: E0428 01:01:07.719874 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:07.890960 kubelet[2617]: E0428 01:01:07.890034 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:08.089351 kubelet[2617]: E0428 01:01:08.016319 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:08.265892 kubelet[2617]: E0428 01:01:08.265017 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:08.385499 kubelet[2617]: E0428 01:01:08.372073 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:08.610625 kubelet[2617]: E0428 01:01:08.610162 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:08.741218 kubelet[2617]: E0428 01:01:08.740913 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:08.892684 kubelet[2617]: E0428 01:01:08.859392 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:08.975267 kubelet[2617]: E0428 01:01:08.974108 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:09.096705 kubelet[2617]: E0428 01:01:09.091277 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:09.282122 kubelet[2617]: E0428 01:01:09.271936 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:09.410741 kubelet[2617]: E0428 01:01:09.397012 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:09.520651 kubelet[2617]: E0428 01:01:09.520021 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:09.795368 kubelet[2617]: E0428 01:01:09.781277 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:09.960851 kubelet[2617]: E0428 01:01:09.918610 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:10.096262 kubelet[2617]: E0428 01:01:10.092217 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:10.283126 kubelet[2617]: E0428 01:01:10.277054 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:10.398168 kubelet[2617]: E0428 01:01:10.391842 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:10.609362 kubelet[2617]: E0428 01:01:10.608888 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:10.775720 kubelet[2617]: E0428 01:01:10.774814 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:10.968817 kubelet[2617]: E0428 01:01:10.961156 2617 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:11.136023 kubelet[2617]: E0428 01:01:11.096374 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:11.215651 kubelet[2617]: E0428 01:01:11.212371 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:11.474383 kubelet[2617]: E0428 01:01:11.470324 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:11.625277 kubelet[2617]: E0428 01:01:11.609404 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:11.801222 kubelet[2617]: E0428 01:01:11.795923 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:11.972812 kubelet[2617]: E0428 01:01:11.967832 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:12.113081 kubelet[2617]: E0428 01:01:12.107598 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:12.269881 kubelet[2617]: E0428 01:01:12.259611 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:12.512696 kubelet[2617]: E0428 01:01:12.398538 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:12.580628 kubelet[2617]: E0428 01:01:12.580312 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:12.761408 kubelet[2617]: E0428 01:01:12.753325 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:12.894125 kubelet[2617]: E0428 01:01:12.891492 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:13.067275 kubelet[2617]: E0428 01:01:13.065343 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:13.193794 kubelet[2617]: E0428 01:01:13.177307 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:13.193794 kubelet[2617]: E0428 01:01:13.183731 2617 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 28 01:01:13.318993 kubelet[2617]: E0428 01:01:13.318539 2617 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:01:13.610853 kubelet[2617]: E0428 01:01:13.609948 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:13.819826 kubelet[2617]: E0428 01:01:13.807532 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:13.914769 kubelet[2617]: E0428 01:01:13.913076 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:14.018362 kubelet[2617]: E0428 01:01:14.017999 2617 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:14.121883 kubelet[2617]: E0428 01:01:14.121613 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:14.243401 kubelet[2617]: E0428 01:01:14.242846 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:14.351294 kubelet[2617]: E0428 01:01:14.349848 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:14.460603 kubelet[2617]: E0428 01:01:14.460000 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:14.565882 kubelet[2617]: E0428 01:01:14.563155 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:14.670177 kubelet[2617]: E0428 01:01:14.669490 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:14.775783 kubelet[2617]: E0428 01:01:14.775273 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:14.879052 kubelet[2617]: E0428 01:01:14.876846 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:14.982325 kubelet[2617]: E0428 01:01:14.980028 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:15.097161 kubelet[2617]: E0428 01:01:15.085353 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:15.189367 kubelet[2617]: E0428 01:01:15.186319 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:15.290955 kubelet[2617]: E0428 01:01:15.289789 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:15.408411 kubelet[2617]: E0428 01:01:15.407033 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:15.550650 kubelet[2617]: E0428 01:01:15.544381 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:15.659016 kubelet[2617]: E0428 01:01:15.653349 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:15.759328 kubelet[2617]: E0428 01:01:15.758149 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:15.889858 kubelet[2617]: E0428 01:01:15.887368 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:16.003862 kubelet[2617]: E0428 01:01:15.999917 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:16.187316 kubelet[2617]: E0428 01:01:16.104805 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:16.291398 kubelet[2617]: E0428 01:01:16.290715 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 
01:01:16.396737 kubelet[2617]: E0428 01:01:16.393285 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:16.510314 kubelet[2617]: E0428 01:01:16.508011 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:16.618848 kubelet[2617]: E0428 01:01:16.618310 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:16.747327 kubelet[2617]: E0428 01:01:16.723763 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:16.857329 kubelet[2617]: E0428 01:01:16.854649 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:16.959996 kubelet[2617]: E0428 01:01:16.959404 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:17.061960 kubelet[2617]: E0428 01:01:17.061361 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:17.190176 kubelet[2617]: E0428 01:01:17.182350 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:17.297378 kubelet[2617]: E0428 01:01:17.296341 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:17.399774 kubelet[2617]: E0428 01:01:17.399328 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:17.501004 kubelet[2617]: E0428 01:01:17.499911 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:17.609809 kubelet[2617]: E0428 01:01:17.605163 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:17.711705 kubelet[2617]: E0428 01:01:17.711078 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:17.825918 kubelet[2617]: E0428 01:01:17.823040 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:17.945100 kubelet[2617]: E0428 01:01:17.943834 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:18.051031 kubelet[2617]: E0428 01:01:18.050631 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:18.200076 kubelet[2617]: E0428 01:01:18.188169 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:18.318044 kubelet[2617]: E0428 01:01:18.316319 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:18.423089 kubelet[2617]: E0428 01:01:18.421269 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:18.584894 kubelet[2617]: E0428 01:01:18.582349 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:18.697186 kubelet[2617]: E0428 01:01:18.692313 2617 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:18.812900 kubelet[2617]: E0428 01:01:18.812223 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:18.961139 kubelet[2617]: E0428 01:01:18.921130 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:19.090177 kubelet[2617]: E0428 01:01:19.089076 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:19.271622 kubelet[2617]: E0428 01:01:19.268376 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:19.389249 kubelet[2617]: E0428 01:01:19.378303 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:19.490970 kubelet[2617]: E0428 01:01:19.490075 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:19.602057 kubelet[2617]: E0428 01:01:19.592948 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:19.713070 kubelet[2617]: E0428 01:01:19.712664 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:19.839023 kubelet[2617]: E0428 01:01:19.838553 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:19.968179 kubelet[2617]: E0428 01:01:19.966376 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:20.069517 kubelet[2617]: E0428 01:01:20.069095 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:20.208568 kubelet[2617]: E0428 01:01:20.207796 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:20.311553 kubelet[2617]: E0428 01:01:20.310686 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:20.413099 kubelet[2617]: E0428 01:01:20.412700 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:20.541362 kubelet[2617]: E0428 01:01:20.522106 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:20.653555 kubelet[2617]: E0428 01:01:20.651402 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:20.760034 kubelet[2617]: E0428 01:01:20.759060 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:20.863351 kubelet[2617]: E0428 01:01:20.862206 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:21.012285 kubelet[2617]: E0428 01:01:21.011974 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:21.130371 kubelet[2617]: E0428 01:01:21.125943 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 
01:01:21.228353 kubelet[2617]: E0428 01:01:21.227515 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:21.372552 kubelet[2617]: E0428 01:01:21.371404 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:21.479955 kubelet[2617]: E0428 01:01:21.478276 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:21.596093 kubelet[2617]: E0428 01:01:21.595654 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:21.701404 kubelet[2617]: E0428 01:01:21.698816 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:21.810712 kubelet[2617]: E0428 01:01:21.804349 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:21.866379 kubelet[2617]: E0428 01:01:21.866043 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:01:21.868014 kubelet[2617]: E0428 01:01:21.867404 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:21.913795 kubelet[2617]: E0428 01:01:21.912985 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:22.022732 kubelet[2617]: E0428 01:01:22.018301 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:22.161552 kubelet[2617]: E0428 01:01:22.161068 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:22.292064 kubelet[2617]: E0428 01:01:22.282829 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:22.395038 kubelet[2617]: E0428 01:01:22.391156 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:22.498628 kubelet[2617]: E0428 01:01:22.498235 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:22.617302 kubelet[2617]: E0428 01:01:22.615240 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:22.721962 kubelet[2617]: E0428 01:01:22.720084 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:22.885942 kubelet[2617]: E0428 01:01:22.879254 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:22.987308 kubelet[2617]: E0428 01:01:22.986950 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:23.103150 kubelet[2617]: E0428 01:01:23.102151 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:23.251117 kubelet[2617]: E0428 01:01:23.243187 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not 
found" Apr 28 01:01:23.365196 kubelet[2617]: E0428 01:01:23.364124 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:23.374507 kubelet[2617]: E0428 01:01:23.366356 2617 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:01:23.492824 kubelet[2617]: E0428 01:01:23.491282 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:23.627228 kubelet[2617]: E0428 01:01:23.615209 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:23.761826 kubelet[2617]: E0428 01:01:23.723126 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:23.860649 kubelet[2617]: E0428 01:01:23.856973 2617 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 28 01:01:24.254193 kubelet[2617]: E0428 01:01:24.252265 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:24.474839 kubelet[2617]: E0428 01:01:24.467643 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:24.582931 kubelet[2617]: E0428 01:01:24.572207 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:24.684572 kubelet[2617]: E0428 01:01:24.684155 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:24.812117 kubelet[2617]: E0428 01:01:24.811695 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:24.987138 kubelet[2617]: E0428 01:01:24.986145 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:25.092380 kubelet[2617]: E0428 01:01:25.089070 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:25.212904 kubelet[2617]: E0428 01:01:25.208384 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:25.401035 kubelet[2617]: E0428 01:01:25.384849 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:25.536095 kubelet[2617]: E0428 01:01:25.532957 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:25.681203 kubelet[2617]: E0428 01:01:25.674111 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:25.845315 kubelet[2617]: E0428 01:01:25.816231 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:26.081702 kubelet[2617]: E0428 01:01:26.071092 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:26.197816 kubelet[2617]: E0428 01:01:26.173775 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:26.742174 
kubelet[2617]: E0428 01:01:26.740029 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:26.898919 kubelet[2617]: E0428 01:01:26.861360 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:27.227386 kubelet[2617]: E0428 01:01:27.222937 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:27.371993 kubelet[2617]: E0428 01:01:27.369652 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:27.519537 kubelet[2617]: E0428 01:01:27.486943 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:27.591229 kubelet[2617]: E0428 01:01:27.590963 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:27.714068 kubelet[2617]: E0428 01:01:27.710338 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:27.893841 kubelet[2617]: E0428 01:01:27.870972 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:28.035905 kubelet[2617]: E0428 01:01:28.035681 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:28.367989 kubelet[2617]: E0428 01:01:28.366241 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:28.367989 kubelet[2617]: E0428 01:01:28.367037 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:01:28.408046 kubelet[2617]: E0428 01:01:28.367964 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:28.533503 kubelet[2617]: E0428 01:01:28.531522 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:28.732342 kubelet[2617]: E0428 01:01:28.725176 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:28.868578 kubelet[2617]: E0428 01:01:28.866335 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:29.036831 kubelet[2617]: E0428 01:01:29.034276 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:29.187669 kubelet[2617]: E0428 01:01:29.187126 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:29.229819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7-rootfs.mount: Deactivated successfully. 
Apr 28 01:01:29.254990 containerd[1586]: time="2026-04-28T01:01:29.253224637Z" level=info msg="shim disconnected" id=94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7 namespace=k8s.io Apr 28 01:01:29.263332 containerd[1586]: time="2026-04-28T01:01:29.260123314Z" level=warning msg="cleaning up after shim disconnected" id=94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7 namespace=k8s.io Apr 28 01:01:29.263332 containerd[1586]: time="2026-04-28T01:01:29.260619667Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:01:29.305243 kubelet[2617]: E0428 01:01:29.297415 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:29.493370 kubelet[2617]: E0428 01:01:29.491715 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:29.621601 kubelet[2617]: E0428 01:01:29.606821 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:29.748211 kubelet[2617]: E0428 01:01:29.744207 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:29.850920 kubelet[2617]: E0428 01:01:29.850534 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:29.966943 kubelet[2617]: E0428 01:01:29.953172 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:29.996320 containerd[1586]: time="2026-04-28T01:01:29.993875748Z" level=warning msg="cleanup warnings time=\"2026-04-28T01:01:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 01:01:30.072606 kubelet[2617]: E0428 01:01:30.071298 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:30.192997 kubelet[2617]: E0428 01:01:30.190029 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:30.303514 kubelet[2617]: E0428 01:01:30.298536 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:30.401104 kubelet[2617]: E0428 01:01:30.400774 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:30.587945 kubelet[2617]: E0428 01:01:30.522878 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:30.703699 kubelet[2617]: E0428 01:01:30.702192 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:30.814411 kubelet[2617]: E0428 01:01:30.813975 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:31.002707 kubelet[2617]: E0428 01:01:30.916192 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:31.170169 kubelet[2617]: E0428 01:01:31.162399 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:31.314274 kubelet[2617]: E0428 01:01:31.292127 2617 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:31.416772 kubelet[2617]: E0428 01:01:31.406517 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:31.618891 kubelet[2617]: E0428 01:01:31.603281 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:31.708311 kubelet[2617]: E0428 01:01:31.707493 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:31.820755 kubelet[2617]: E0428 01:01:31.819103 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:32.011197 kubelet[2617]: E0428 01:01:31.922411 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:32.117363 kubelet[2617]: E0428 01:01:32.117091 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:32.265695 kubelet[2617]: E0428 01:01:32.261977 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:32.302176 kubelet[2617]: E0428 01:01:32.284314 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:01:32.312006 kubelet[2617]: I0428 01:01:32.311797 2617 scope.go:117] "RemoveContainer" containerID="94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7" Apr 28 01:01:32.341280 kubelet[2617]: E0428 01:01:32.341015 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:32.396158 kubelet[2617]: E0428 01:01:32.393104 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:32.592913 kubelet[2617]: E0428 01:01:32.576841 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:32.714675 containerd[1586]: time="2026-04-28T01:01:32.710631508Z" level=info msg="CreateContainer within sandbox \"2ac400876d35f772f1fdc2675a5271f5dd8eb62cd0a69b024b6ed75bec901e74\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 28 01:01:32.762973 kubelet[2617]: E0428 01:01:32.722251 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:32.884382 kubelet[2617]: E0428 01:01:32.881630 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:33.019842 kubelet[2617]: E0428 01:01:33.019364 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:33.265279 kubelet[2617]: E0428 01:01:33.265034 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:33.506372 kubelet[2617]: E0428 01:01:33.490542 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:33.538706 kubelet[2617]: E0428 01:01:33.529498 2617 eviction_manager.go:292] "Eviction manager: failed to get 
summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:01:33.690060 kubelet[2617]: E0428 01:01:33.689732 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:33.799290 kubelet[2617]: E0428 01:01:33.796166 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:33.920893 kubelet[2617]: E0428 01:01:33.916221 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:33.905569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856383711.mount: Deactivated successfully. Apr 28 01:01:33.958722 containerd[1586]: time="2026-04-28T01:01:33.958372390Z" level=info msg="CreateContainer within sandbox \"2ac400876d35f772f1fdc2675a5271f5dd8eb62cd0a69b024b6ed75bec901e74\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de\"" Apr 28 01:01:33.973747 containerd[1586]: time="2026-04-28T01:01:33.973414425Z" level=info msg="StartContainer for \"ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de\"" Apr 28 01:01:34.018917 kubelet[2617]: E0428 01:01:34.018326 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:34.174055 kubelet[2617]: E0428 01:01:34.167814 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:34.274521 kubelet[2617]: E0428 01:01:34.274236 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:34.413338 kubelet[2617]: E0428 01:01:34.413040 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:34.492313 systemd[1]: Reloading requested from client PID 2960 ('systemctl') (unit session-7.scope)... Apr 28 01:01:34.492546 systemd[1]: Reloading... Apr 28 01:01:34.524401 kubelet[2617]: E0428 01:01:34.523051 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:34.530086 kubelet[2617]: E0428 01:01:34.529823 2617 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 28 01:01:34.796025 kubelet[2617]: E0428 01:01:34.792873 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:34.929787 kubelet[2617]: E0428 01:01:34.926074 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:35.047177 zram_generator::config[3008]: No configuration found. 
Apr 28 01:01:35.053981 kubelet[2617]: E0428 01:01:35.053812 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:35.061123 containerd[1586]: time="2026-04-28T01:01:35.054925145Z" level=info msg="StartContainer for \"ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de\" returns successfully" Apr 28 01:01:35.174726 kubelet[2617]: E0428 01:01:35.169615 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:35.275853 kubelet[2617]: E0428 01:01:35.273037 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:35.409714 kubelet[2617]: E0428 01:01:35.409085 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:35.527954 kubelet[2617]: E0428 01:01:35.521348 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:35.548936 kubelet[2617]: E0428 01:01:35.548296 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:01:35.551537 kubelet[2617]: E0428 01:01:35.551404 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:35.686349 kubelet[2617]: E0428 01:01:35.665785 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:35.686349 kubelet[2617]: E0428 01:01:35.684173 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:01:35.700628 kubelet[2617]: E0428 01:01:35.700546 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:35.767506 kubelet[2617]: E0428 01:01:35.767148 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:35.886116 kubelet[2617]: E0428 01:01:35.882197 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:35.990582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 28 01:01:35.992067 kubelet[2617]: E0428 01:01:35.990610 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:36.097044 kubelet[2617]: E0428 01:01:36.096783 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:36.131920 kubelet[2617]: E0428 01:01:36.131819 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:01:36.132696 kubelet[2617]: E0428 01:01:36.132643 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:36.133326 kubelet[2617]: E0428 01:01:36.133254 2617 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:01:36.143187 kubelet[2617]: E0428 01:01:36.140687 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:36.203935 kubelet[2617]: E0428 01:01:36.203546 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:36.308171 kubelet[2617]: E0428 01:01:36.305769 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:36.408992 kubelet[2617]: E0428 01:01:36.407188 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:36.511617 kubelet[2617]: E0428 01:01:36.510587 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:36.614581 kubelet[2617]: E0428 01:01:36.613871 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:36.615916 systemd[1]: Reloading finished in 2119 ms. Apr 28 01:01:36.720793 kubelet[2617]: E0428 01:01:36.720559 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:36.822867 kubelet[2617]: E0428 01:01:36.822068 2617 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:01:36.848837 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:01:36.911816 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 01:01:36.913239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:01:36.966303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:01:40.106795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:01:40.218907 (kubelet)[3069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 01:01:42.263958 kubelet[3069]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 01:01:42.263958 kubelet[3069]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 28 01:01:42.263958 kubelet[3069]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 01:01:42.275766 kubelet[3069]: I0428 01:01:42.270252 3069 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 01:01:42.422883 kubelet[3069]: I0428 01:01:42.419300 3069 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 28 01:01:42.430299 kubelet[3069]: I0428 01:01:42.423041 3069 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 01:01:42.433654 kubelet[3069]: I0428 01:01:42.433540 3069 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 01:01:42.479250 kubelet[3069]: I0428 01:01:42.478967 3069 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 28 01:01:42.523540 kubelet[3069]: I0428 01:01:42.521238 3069 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 01:01:42.653211 kubelet[3069]: E0428 01:01:42.650999 3069 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 01:01:42.662038 kubelet[3069]: I0428 01:01:42.653334 3069 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 28 01:01:42.814537 kubelet[3069]: I0428 01:01:42.813671 3069 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 28 01:01:42.825750 kubelet[3069]: I0428 01:01:42.823344 3069 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 01:01:42.828259 kubelet[3069]: I0428 01:01:42.824936 3069 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 28 01:01:42.828259 kubelet[3069]: I0428 01:01:42.827569 3069 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 01:01:42.828259 kubelet[3069]: I0428 01:01:42.828106 3069 container_manager_linux.go:303] "Creating device plugin manager" Apr 28 01:01:42.829461 kubelet[3069]: I0428 01:01:42.829394 3069 state_mem.go:36] "Initialized new in-memory state store" Apr 28 01:01:42.830638 kubelet[3069]: I0428 01:01:42.830569 3069 kubelet.go:480] "Attempting to sync node with API server" Apr 28 01:01:42.830832 kubelet[3069]: I0428 01:01:42.830712 3069 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 01:01:42.830987 kubelet[3069]: I0428 01:01:42.830925 3069 kubelet.go:386] "Adding apiserver pod source" Apr 28 01:01:42.831227 kubelet[3069]: I0428 01:01:42.831153 3069 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 01:01:42.901128 kubelet[3069]: I0428 01:01:42.900813 3069 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 01:01:42.910656 kubelet[3069]: I0428 01:01:42.909264 3069 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 01:01:43.045625 kubelet[3069]: I0428 01:01:43.041235 3069 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 28 01:01:43.045625 kubelet[3069]: I0428 01:01:43.041279 3069 server.go:1289] "Started kubelet" Apr 28 01:01:43.085046 kubelet[3069]: I0428 01:01:43.081169 3069 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 
01:01:43.085046 kubelet[3069]: I0428 01:01:43.053363 3069 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 01:01:43.127090 kubelet[3069]: I0428 01:01:43.123316 3069 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 01:01:43.196840 kubelet[3069]: I0428 01:01:43.190993 3069 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 01:01:43.230750 kubelet[3069]: I0428 01:01:43.230520 3069 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 01:01:43.266630 kubelet[3069]: I0428 01:01:43.266311 3069 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 28 01:01:43.277289 kubelet[3069]: I0428 01:01:43.274345 3069 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 28 01:01:43.288670 kubelet[3069]: I0428 01:01:43.282625 3069 reconciler.go:26] "Reconciler: start to sync state" Apr 28 01:01:43.328114 kubelet[3069]: I0428 01:01:43.327854 3069 factory.go:223] Registration of the systemd container factory successfully Apr 28 01:01:43.328114 kubelet[3069]: I0428 01:01:43.328052 3069 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 01:01:43.351802 kubelet[3069]: I0428 01:01:43.351055 3069 server.go:317] "Adding debug handlers to kubelet server" Apr 28 01:01:43.355789 kubelet[3069]: I0428 01:01:43.355287 3069 factory.go:223] Registration of the containerd container factory successfully Apr 28 01:01:43.363418 kubelet[3069]: E0428 01:01:43.356049 3069 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 01:01:43.567948 kubelet[3069]: I0428 01:01:43.567389 3069 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 28 01:01:43.699393 kubelet[3069]: I0428 01:01:43.689142 3069 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 28 01:01:43.709002 kubelet[3069]: I0428 01:01:43.708801 3069 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 28 01:01:43.758527 kubelet[3069]: I0428 01:01:43.758109 3069 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 28 01:01:43.758527 kubelet[3069]: I0428 01:01:43.758294 3069 kubelet.go:2436] "Starting kubelet main sync loop" Apr 28 01:01:43.808670 kubelet[3069]: E0428 01:01:43.803403 3069 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 01:01:43.845642 kubelet[3069]: I0428 01:01:43.845308 3069 apiserver.go:52] "Watching apiserver" Apr 28 01:01:44.006315 kubelet[3069]: E0428 01:01:43.995251 3069 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 01:01:44.208705 kubelet[3069]: E0428 01:01:44.208006 3069 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 01:01:44.610343 kubelet[3069]: E0428 01:01:44.609279 3069 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 28 01:01:44.680660 kubelet[3069]: I0428 01:01:44.678742 3069 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 01:01:44.680660 kubelet[3069]: I0428 01:01:44.678882 3069 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 01:01:44.680660 kubelet[3069]: I0428 01:01:44.679050 3069 state_mem.go:36] "Initialized new in-memory state store" Apr 28 01:01:44.680660 kubelet[3069]: I0428 01:01:44.679864 3069 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 28 01:01:44.680660 kubelet[3069]: I0428 01:01:44.679881 3069 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 28 01:01:44.680660 kubelet[3069]: I0428 01:01:44.680016 3069 policy_none.go:49] "None policy: Start" Apr 28 01:01:44.680660 kubelet[3069]: I0428 01:01:44.680219 3069 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 28 01:01:44.680660 kubelet[3069]: I0428 01:01:44.680310 3069 state_mem.go:35] "Initializing new in-memory state store" Apr 28 01:01:44.682141 kubelet[3069]: I0428 01:01:44.682127 3069 state_mem.go:75] "Updated machine memory state" Apr 28 01:01:44.688369 kubelet[3069]: E0428 01:01:44.688220 3069 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 01:01:44.689995 kubelet[3069]: I0428 01:01:44.689979 3069 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 01:01:44.690262 kubelet[3069]: I0428 01:01:44.690161 3069 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 01:01:44.691077 kubelet[3069]: I0428 01:01:44.691059 3069 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 01:01:44.717307 kubelet[3069]: E0428 01:01:44.717069 3069 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 28 01:01:44.901914 kubelet[3069]: I0428 01:01:44.898152 3069 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:01:45.058321 kubelet[3069]: I0428 01:01:45.054573 3069 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 28 01:01:45.063715 kubelet[3069]: I0428 01:01:45.063509 3069 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 01:01:45.444154 kubelet[3069]: I0428 01:01:45.435602 3069 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 01:01:45.444154 kubelet[3069]: I0428 01:01:45.435584 3069 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 01:01:45.452895 kubelet[3069]: I0428 01:01:45.449286 3069 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 01:01:45.508083 kubelet[3069]: I0428 01:01:45.506856 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:01:45.508083 kubelet[3069]: I0428 01:01:45.507098 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:01:45.512771 kubelet[3069]: I0428 01:01:45.512500 3069 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 28 01:01:45.611180 kubelet[3069]: I0428 01:01:45.609798 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:01:45.630542 kubelet[3069]: I0428 01:01:45.625405 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04ce0b2223f493e56fe4c887c063836d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"04ce0b2223f493e56fe4c887c063836d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:01:45.630542 kubelet[3069]: I0428 01:01:45.626060 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 28 01:01:45.630542 kubelet[3069]: I0428 01:01:45.626789 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04ce0b2223f493e56fe4c887c063836d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"04ce0b2223f493e56fe4c887c063836d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:01:45.630542 kubelet[3069]: I0428 
01:01:45.626809 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04ce0b2223f493e56fe4c887c063836d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"04ce0b2223f493e56fe4c887c063836d\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:01:45.636962 kubelet[3069]: I0428 01:01:45.635641 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:01:45.636962 kubelet[3069]: I0428 01:01:45.635907 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:01:45.821542 kubelet[3069]: E0428 01:01:45.819984 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:45.837401 kubelet[3069]: E0428 01:01:45.837056 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:45.850199 kubelet[3069]: E0428 01:01:45.849542 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:46.140855 kubelet[3069]: I0428 01:01:46.135794 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.135712029 podStartE2EDuration="1.135712029s" podCreationTimestamp="2026-04-28 01:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:01:46.135615515 +0000 UTC m=+5.858755254" watchObservedRunningTime="2026-04-28 01:01:46.135712029 +0000 UTC m=+5.858851758" Apr 28 01:01:46.162292 kubelet[3069]: E0428 01:01:46.161960 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:46.163628 kubelet[3069]: E0428 01:01:46.163005 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:46.163628 kubelet[3069]: E0428 01:01:46.163176 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:46.210950 kubelet[3069]: I0428 01:01:46.210617 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.21056827 podStartE2EDuration="1.21056827s" podCreationTimestamp="2026-04-28 01:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-04-28 01:01:46.210218422 +0000 UTC m=+5.933358165" watchObservedRunningTime="2026-04-28 01:01:46.21056827 +0000 UTC m=+5.933708035" Apr 28 01:01:46.455592 kubelet[3069]: I0428 01:01:46.450638 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.450597071 podStartE2EDuration="1.450597071s" podCreationTimestamp="2026-04-28 01:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:01:46.279148948 +0000 UTC m=+6.002288684" watchObservedRunningTime="2026-04-28 01:01:46.450597071 +0000 UTC m=+6.173736809" Apr 28 01:01:47.328877 kubelet[3069]: E0428 01:01:47.328246 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:47.328877 kubelet[3069]: E0428 01:01:47.328943 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:47.570140 sudo[1759]: pam_unix(sudo:session): session closed for user root Apr 28 01:01:47.804694 sshd[1748]: pam_unix(sshd:session): session closed for user core Apr 28 01:01:48.036018 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:45664.service: Deactivated successfully. Apr 28 01:01:48.179154 systemd[1]: session-7.scope: Deactivated successfully. Apr 28 01:01:48.398889 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Apr 28 01:01:48.812084 systemd-logind[1560]: Removed session 7. Apr 28 01:01:49.258760 kubelet[3069]: E0428 01:01:49.257207 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.463s" Apr 28 01:01:49.390341 kubelet[3069]: E0428 01:01:49.257255 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:49.457945 kubelet[3069]: E0428 01:01:49.457350 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:52.609215 kubelet[3069]: E0428 01:01:52.607827 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:01:53.493638 kubelet[3069]: E0428 01:01:53.485093 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:10.078112 kubelet[3069]: I0428 01:02:09.994746 3069 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 28 01:02:10.353887 containerd[1586]: time="2026-04-28T01:02:10.352736149Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 28 01:02:10.362165 kubelet[3069]: I0428 01:02:10.361955 3069 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 28 01:02:12.155985 kubelet[3069]: I0428 01:02:12.151408 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/08d48d18-f6e1-4338-a27e-1ff56efc86d8-kube-proxy\") pod \"kube-proxy-mdsxp\" (UID: \"08d48d18-f6e1-4338-a27e-1ff56efc86d8\") " pod="kube-system/kube-proxy-mdsxp" Apr 28 01:02:12.361915 kubelet[3069]: I0428 01:02:12.318320 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/3bfbbadb-e07b-40d7-be3e-20caa0d83c5f-cni-plugin\") pod \"kube-flannel-ds-svzjr\" (UID: \"3bfbbadb-e07b-40d7-be3e-20caa0d83c5f\") " pod="kube-flannel/kube-flannel-ds-svzjr" Apr 28 01:02:12.793587 kubelet[3069]: I0428 01:02:12.793069 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bfbbadb-e07b-40d7-be3e-20caa0d83c5f-xtables-lock\") pod \"kube-flannel-ds-svzjr\" (UID: \"3bfbbadb-e07b-40d7-be3e-20caa0d83c5f\") " pod="kube-flannel/kube-flannel-ds-svzjr" Apr 28 01:02:12.995749 kubelet[3069]: I0428 01:02:12.987501 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8j74\" (UniqueName: \"kubernetes.io/projected/3bfbbadb-e07b-40d7-be3e-20caa0d83c5f-kube-api-access-g8j74\") pod \"kube-flannel-ds-svzjr\" (UID: \"3bfbbadb-e07b-40d7-be3e-20caa0d83c5f\") " pod="kube-flannel/kube-flannel-ds-svzjr" Apr 28 01:02:13.203977 kubelet[3069]: I0428 01:02:13.135155 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3bfbbadb-e07b-40d7-be3e-20caa0d83c5f-run\") pod \"kube-flannel-ds-svzjr\" (UID: \"3bfbbadb-e07b-40d7-be3e-20caa0d83c5f\") " pod="kube-flannel/kube-flannel-ds-svzjr" Apr 28 01:02:13.390205 kubelet[3069]: I0428 01:02:13.379055 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08d48d18-f6e1-4338-a27e-1ff56efc86d8-lib-modules\") pod \"kube-proxy-mdsxp\" (UID: \"08d48d18-f6e1-4338-a27e-1ff56efc86d8\") " pod="kube-system/kube-proxy-mdsxp" Apr 28 01:02:13.689968 kubelet[3069]: I0428 01:02:13.684980 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08d48d18-f6e1-4338-a27e-1ff56efc86d8-xtables-lock\") pod \"kube-proxy-mdsxp\" (UID: \"08d48d18-f6e1-4338-a27e-1ff56efc86d8\") " pod="kube-system/kube-proxy-mdsxp" Apr 28 01:02:13.809974 kubelet[3069]: I0428 01:02:13.802362 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/3bfbbadb-e07b-40d7-be3e-20caa0d83c5f-cni\") pod \"kube-flannel-ds-svzjr\" (UID: \"3bfbbadb-e07b-40d7-be3e-20caa0d83c5f\") " pod="kube-flannel/kube-flannel-ds-svzjr" Apr 28 01:02:13.878087 kubelet[3069]: I0428 01:02:13.875590 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/3bfbbadb-e07b-40d7-be3e-20caa0d83c5f-flannel-cfg\") pod \"kube-flannel-ds-svzjr\" (UID: 
\"3bfbbadb-e07b-40d7-be3e-20caa0d83c5f\") " pod="kube-flannel/kube-flannel-ds-svzjr" Apr 28 01:02:13.970178 kubelet[3069]: I0428 01:02:13.969792 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsjxn\" (UniqueName: \"kubernetes.io/projected/08d48d18-f6e1-4338-a27e-1ff56efc86d8-kube-api-access-zsjxn\") pod \"kube-proxy-mdsxp\" (UID: \"08d48d18-f6e1-4338-a27e-1ff56efc86d8\") " pod="kube-system/kube-proxy-mdsxp" Apr 28 01:02:14.013350 kubelet[3069]: E0428 01:02:13.987050 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.217s" Apr 28 01:02:14.652105 kubelet[3069]: E0428 01:02:14.651777 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:14.682260 containerd[1586]: time="2026-04-28T01:02:14.680497053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdsxp,Uid:08d48d18-f6e1-4338-a27e-1ff56efc86d8,Namespace:kube-system,Attempt:0,}" Apr 28 01:02:14.783006 kubelet[3069]: E0428 01:02:14.782155 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:14.876975 containerd[1586]: time="2026-04-28T01:02:14.874829934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-svzjr,Uid:3bfbbadb-e07b-40d7-be3e-20caa0d83c5f,Namespace:kube-flannel,Attempt:0,}" Apr 28 01:02:15.762880 containerd[1586]: time="2026-04-28T01:02:15.668372408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:02:15.762880 containerd[1586]: time="2026-04-28T01:02:15.668630414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:02:15.762880 containerd[1586]: time="2026-04-28T01:02:15.668680939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:02:15.762880 containerd[1586]: time="2026-04-28T01:02:15.668890468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:02:16.160703 systemd[1]: run-containerd-runc-k8s.io-2dff39e17277b36500043dd38774f340016a803be7226458f1699d6d9bbdb17f-runc.Ai2CmX.mount: Deactivated successfully. Apr 28 01:02:16.214714 containerd[1586]: time="2026-04-28T01:02:16.211517750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:02:16.214714 containerd[1586]: time="2026-04-28T01:02:16.211870754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:02:16.214714 containerd[1586]: time="2026-04-28T01:02:16.211886929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:02:16.219981 containerd[1586]: time="2026-04-28T01:02:16.215996049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:02:16.628937 containerd[1586]: time="2026-04-28T01:02:16.621640826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-svzjr,Uid:3bfbbadb-e07b-40d7-be3e-20caa0d83c5f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"2dff39e17277b36500043dd38774f340016a803be7226458f1699d6d9bbdb17f\"" Apr 28 01:02:16.649121 kubelet[3069]: E0428 01:02:16.649075 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:16.652873 containerd[1586]: time="2026-04-28T01:02:16.652844110Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Apr 28 01:02:18.165955 containerd[1586]: time="2026-04-28T01:02:18.163330585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdsxp,Uid:08d48d18-f6e1-4338-a27e-1ff56efc86d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"854d903f85bb9bc4362d6b815f9436b35b142660771b6319640cbcc141610d63\"" Apr 28 01:02:18.380077 kubelet[3069]: E0428 01:02:18.357227 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:18.807595 containerd[1586]: time="2026-04-28T01:02:18.806753896Z" level=info msg="CreateContainer within sandbox \"854d903f85bb9bc4362d6b815f9436b35b142660771b6319640cbcc141610d63\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 28 01:02:18.999909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2721542790.mount: Deactivated successfully. Apr 28 01:02:19.030834 containerd[1586]: time="2026-04-28T01:02:19.030363031Z" level=info msg="CreateContainer within sandbox \"854d903f85bb9bc4362d6b815f9436b35b142660771b6319640cbcc141610d63\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0fc351b6e5c3f6c51a95a275ea55a3211e3436113ed6408d31315abd973d665a\"" Apr 28 01:02:19.121945 containerd[1586]: time="2026-04-28T01:02:19.119141463Z" level=info msg="StartContainer for \"0fc351b6e5c3f6c51a95a275ea55a3211e3436113ed6408d31315abd973d665a\"" Apr 28 01:02:21.385356 containerd[1586]: time="2026-04-28T01:02:21.384940049Z" level=info msg="StartContainer for \"0fc351b6e5c3f6c51a95a275ea55a3211e3436113ed6408d31315abd973d665a\" returns successfully" Apr 28 01:02:22.114344 kubelet[3069]: E0428 01:02:22.105192 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:22.540549 kubelet[3069]: I0428 01:02:22.539651 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mdsxp" podStartSLOduration=12.539609462 podStartE2EDuration="12.539609462s" podCreationTimestamp="2026-04-28 01:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:02:22.539486132 +0000 UTC m=+42.262625872" watchObservedRunningTime="2026-04-28 01:02:22.539609462 +0000 UTC m=+42.262749201" Apr 28 01:02:23.129091 kubelet[3069]: E0428 01:02:23.128918 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:23.196316 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3818301723.mount: Deactivated successfully. Apr 28 01:02:24.226656 containerd[1586]: time="2026-04-28T01:02:24.226258939Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:02:24.235104 containerd[1586]: time="2026-04-28T01:02:24.233041113Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Apr 28 01:02:24.246783 containerd[1586]: time="2026-04-28T01:02:24.246710454Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:02:24.257559 containerd[1586]: time="2026-04-28T01:02:24.256329413Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:02:24.262724 containerd[1586]: time="2026-04-28T01:02:24.262531510Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 7.609294667s" Apr 28 01:02:24.262724 containerd[1586]: time="2026-04-28T01:02:24.262588159Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Apr 28 01:02:24.551651 containerd[1586]: time="2026-04-28T01:02:24.544601812Z" level=info msg="CreateContainer within sandbox \"2dff39e17277b36500043dd38774f340016a803be7226458f1699d6d9bbdb17f\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Apr 28 01:02:24.919187 containerd[1586]: time="2026-04-28T01:02:24.915776289Z" level=info msg="CreateContainer within sandbox \"2dff39e17277b36500043dd38774f340016a803be7226458f1699d6d9bbdb17f\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"96db3434ef5c3cc63283db600439fc166bd68056b55c80a71a3884579955eef2\"" Apr 28 01:02:24.963738 containerd[1586]: time="2026-04-28T01:02:24.959056217Z" level=info msg="StartContainer for \"96db3434ef5c3cc63283db600439fc166bd68056b55c80a71a3884579955eef2\"" Apr 28 01:02:25.825414 containerd[1586]: time="2026-04-28T01:02:25.823277497Z" level=info msg="StartContainer for \"96db3434ef5c3cc63283db600439fc166bd68056b55c80a71a3884579955eef2\" returns successfully" Apr 28 01:02:26.277641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96db3434ef5c3cc63283db600439fc166bd68056b55c80a71a3884579955eef2-rootfs.mount: Deactivated successfully. 
Apr 28 01:02:26.281093 containerd[1586]: time="2026-04-28T01:02:26.280851031Z" level=info msg="shim disconnected" id=96db3434ef5c3cc63283db600439fc166bd68056b55c80a71a3884579955eef2 namespace=k8s.io Apr 28 01:02:26.281093 containerd[1586]: time="2026-04-28T01:02:26.280988346Z" level=warning msg="cleaning up after shim disconnected" id=96db3434ef5c3cc63283db600439fc166bd68056b55c80a71a3884579955eef2 namespace=k8s.io Apr 28 01:02:26.281093 containerd[1586]: time="2026-04-28T01:02:26.281002012Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:02:26.423610 containerd[1586]: time="2026-04-28T01:02:26.422067087Z" level=warning msg="cleanup warnings time=\"2026-04-28T01:02:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 01:02:26.449612 kubelet[3069]: E0428 01:02:26.448724 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:26.461902 containerd[1586]: time="2026-04-28T01:02:26.461367662Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Apr 28 01:02:36.290935 containerd[1586]: time="2026-04-28T01:02:36.290813785Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:02:36.292403 containerd[1586]: time="2026-04-28T01:02:36.291501161Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Apr 28 01:02:36.292975 containerd[1586]: time="2026-04-28T01:02:36.292905332Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:02:36.300586 containerd[1586]: time="2026-04-28T01:02:36.300120111Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:02:36.302459 containerd[1586]: time="2026-04-28T01:02:36.302343810Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 9.840922467s" Apr 28 01:02:36.302766 containerd[1586]: time="2026-04-28T01:02:36.302413436Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Apr 28 01:02:36.322206 containerd[1586]: time="2026-04-28T01:02:36.321915683Z" level=info msg="CreateContainer within sandbox \"2dff39e17277b36500043dd38774f340016a803be7226458f1699d6d9bbdb17f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 28 01:02:36.376674 containerd[1586]: time="2026-04-28T01:02:36.376495257Z" level=info msg="CreateContainer within sandbox \"2dff39e17277b36500043dd38774f340016a803be7226458f1699d6d9bbdb17f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2a859a329dd39a0220c7031e6a151ed0c6edfdf5149e94e1809fe8254afcb973\"" Apr 28 01:02:36.380131 containerd[1586]: time="2026-04-28T01:02:36.380053844Z" 
level=info msg="StartContainer for \"2a859a329dd39a0220c7031e6a151ed0c6edfdf5149e94e1809fe8254afcb973\"" Apr 28 01:02:36.684129 containerd[1586]: time="2026-04-28T01:02:36.682841686Z" level=info msg="StartContainer for \"2a859a329dd39a0220c7031e6a151ed0c6edfdf5149e94e1809fe8254afcb973\" returns successfully" Apr 28 01:02:36.767544 kubelet[3069]: I0428 01:02:36.766201 3069 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 28 01:02:36.845616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a859a329dd39a0220c7031e6a151ed0c6edfdf5149e94e1809fe8254afcb973-rootfs.mount: Deactivated successfully. Apr 28 01:02:36.888087 containerd[1586]: time="2026-04-28T01:02:36.886928717Z" level=info msg="shim disconnected" id=2a859a329dd39a0220c7031e6a151ed0c6edfdf5149e94e1809fe8254afcb973 namespace=k8s.io Apr 28 01:02:36.888087 containerd[1586]: time="2026-04-28T01:02:36.887415755Z" level=warning msg="cleaning up after shim disconnected" id=2a859a329dd39a0220c7031e6a151ed0c6edfdf5149e94e1809fe8254afcb973 namespace=k8s.io Apr 28 01:02:36.888087 containerd[1586]: time="2026-04-28T01:02:36.887543854Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:02:37.164021 kubelet[3069]: E0428 01:02:37.163706 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:37.206819 kubelet[3069]: I0428 01:02:37.199703 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5345e0a4-93aa-402f-8137-d129fbd0e8a0-config-volume\") pod \"coredns-674b8bbfcf-682j5\" (UID: \"5345e0a4-93aa-402f-8137-d129fbd0e8a0\") " pod="kube-system/coredns-674b8bbfcf-682j5" Apr 28 01:02:37.215737 kubelet[3069]: I0428 01:02:37.214164 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74cq\" (UniqueName: \"kubernetes.io/projected/6fe59fc6-df4c-450d-a94e-46bc8c8fda1b-kube-api-access-k74cq\") pod \"coredns-674b8bbfcf-cp225\" (UID: \"6fe59fc6-df4c-450d-a94e-46bc8c8fda1b\") " pod="kube-system/coredns-674b8bbfcf-cp225" Apr 28 01:02:37.218792 kubelet[3069]: I0428 01:02:37.216080 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fe59fc6-df4c-450d-a94e-46bc8c8fda1b-config-volume\") pod \"coredns-674b8bbfcf-cp225\" (UID: \"6fe59fc6-df4c-450d-a94e-46bc8c8fda1b\") " pod="kube-system/coredns-674b8bbfcf-cp225" Apr 28 01:02:37.218792 kubelet[3069]: I0428 01:02:37.216149 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frgdc\" (UniqueName: \"kubernetes.io/projected/5345e0a4-93aa-402f-8137-d129fbd0e8a0-kube-api-access-frgdc\") pod \"coredns-674b8bbfcf-682j5\" (UID: \"5345e0a4-93aa-402f-8137-d129fbd0e8a0\") " pod="kube-system/coredns-674b8bbfcf-682j5" Apr 28 01:02:37.253114 containerd[1586]: time="2026-04-28T01:02:37.252270550Z" level=info msg="CreateContainer within sandbox \"2dff39e17277b36500043dd38774f340016a803be7226458f1699d6d9bbdb17f\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Apr 28 01:02:37.347526 containerd[1586]: time="2026-04-28T01:02:37.346838552Z" level=info msg="CreateContainer within sandbox \"2dff39e17277b36500043dd38774f340016a803be7226458f1699d6d9bbdb17f\" for 
&ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"b06625f5cf6268a3f592d570763ca9ba3f049a939c74a0cae6c227a7f5fe60bf\"" Apr 28 01:02:37.371210 containerd[1586]: time="2026-04-28T01:02:37.369942326Z" level=info msg="StartContainer for \"b06625f5cf6268a3f592d570763ca9ba3f049a939c74a0cae6c227a7f5fe60bf\"" Apr 28 01:02:37.434156 kubelet[3069]: E0428 01:02:37.423398 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:37.488112 containerd[1586]: time="2026-04-28T01:02:37.487810577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-682j5,Uid:5345e0a4-93aa-402f-8137-d129fbd0e8a0,Namespace:kube-system,Attempt:0,}" Apr 28 01:02:37.619030 kubelet[3069]: E0428 01:02:37.618861 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:37.660770 containerd[1586]: time="2026-04-28T01:02:37.658752805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cp225,Uid:6fe59fc6-df4c-450d-a94e-46bc8c8fda1b,Namespace:kube-system,Attempt:0,}" Apr 28 01:02:37.832010 containerd[1586]: time="2026-04-28T01:02:37.831804687Z" level=info msg="StartContainer for \"b06625f5cf6268a3f592d570763ca9ba3f049a939c74a0cae6c227a7f5fe60bf\" returns successfully" Apr 28 01:02:37.842386 containerd[1586]: time="2026-04-28T01:02:37.842122370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-682j5,Uid:5345e0a4-93aa-402f-8137-d129fbd0e8a0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8a8fafb0e335ab1481ae33a7e4c5791989967b55fb027a7075ab35bf56edc9f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 28 01:02:37.845731 kubelet[3069]: E0428 01:02:37.843601 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a8fafb0e335ab1481ae33a7e4c5791989967b55fb027a7075ab35bf56edc9f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 28 01:02:37.845731 kubelet[3069]: E0428 01:02:37.843667 3069 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a8fafb0e335ab1481ae33a7e4c5791989967b55fb027a7075ab35bf56edc9f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-682j5" Apr 28 01:02:37.845731 kubelet[3069]: E0428 01:02:37.843754 3069 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a8fafb0e335ab1481ae33a7e4c5791989967b55fb027a7075ab35bf56edc9f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-682j5" Apr 28 01:02:37.845731 kubelet[3069]: E0428 01:02:37.843867 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-682j5_kube-system(5345e0a4-93aa-402f-8137-d129fbd0e8a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-682j5_kube-system(5345e0a4-93aa-402f-8137-d129fbd0e8a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8a8fafb0e335ab1481ae33a7e4c5791989967b55fb027a7075ab35bf56edc9f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-682j5" podUID="5345e0a4-93aa-402f-8137-d129fbd0e8a0" Apr 28 01:02:37.965609 containerd[1586]: time="2026-04-28T01:02:37.964970387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cp225,Uid:6fe59fc6-df4c-450d-a94e-46bc8c8fda1b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6060d78590b187996004c348461e816d0acf6594c5302bb5d8cbbfe32f2fada\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 28 01:02:37.967539 kubelet[3069]: E0428 01:02:37.966971 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6060d78590b187996004c348461e816d0acf6594c5302bb5d8cbbfe32f2fada\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 28 01:02:37.967895 kubelet[3069]: E0428 01:02:37.967730 3069 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6060d78590b187996004c348461e816d0acf6594c5302bb5d8cbbfe32f2fada\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-cp225" Apr 28 01:02:37.967895 kubelet[3069]: E0428 01:02:37.967755 3069 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6060d78590b187996004c348461e816d0acf6594c5302bb5d8cbbfe32f2fada\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-cp225" Apr 28 01:02:37.967945 kubelet[3069]: E0428 01:02:37.967854 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cp225_kube-system(6fe59fc6-df4c-450d-a94e-46bc8c8fda1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cp225_kube-system(6fe59fc6-df4c-450d-a94e-46bc8c8fda1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6060d78590b187996004c348461e816d0acf6594c5302bb5d8cbbfe32f2fada\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-cp225" podUID="6fe59fc6-df4c-450d-a94e-46bc8c8fda1b" Apr 28 01:02:38.163267 kubelet[3069]: E0428 01:02:38.162255 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:38.186318 kubelet[3069]: I0428 01:02:38.186066 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-svzjr" podStartSLOduration=7.533561336 podStartE2EDuration="27.186022841s" podCreationTimestamp="2026-04-28 01:02:11 +0000 UTC" firstStartedPulling="2026-04-28 01:02:16.651906579 +0000 UTC 
m=+36.375046307" lastFinishedPulling="2026-04-28 01:02:36.304368083 +0000 UTC m=+56.027507812" observedRunningTime="2026-04-28 01:02:38.185985447 +0000 UTC m=+57.909125183" watchObservedRunningTime="2026-04-28 01:02:38.186022841 +0000 UTC m=+57.909162576" Apr 28 01:02:38.362854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8a8fafb0e335ab1481ae33a7e4c5791989967b55fb027a7075ab35bf56edc9f-shm.mount: Deactivated successfully. Apr 28 01:02:39.216825 kubelet[3069]: E0428 01:02:39.212261 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:39.214764 systemd-networkd[1250]: flannel.1: Link UP Apr 28 01:02:39.214767 systemd-networkd[1250]: flannel.1: Gained carrier Apr 28 01:02:40.305867 systemd-networkd[1250]: flannel.1: Gained IPv6LL Apr 28 01:02:48.897835 kubelet[3069]: E0428 01:02:48.897390 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:49.087707 containerd[1586]: time="2026-04-28T01:02:49.085343453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cp225,Uid:6fe59fc6-df4c-450d-a94e-46bc8c8fda1b,Namespace:kube-system,Attempt:0,}" Apr 28 01:02:50.506418 systemd-networkd[1250]: cni0: Link UP Apr 28 01:02:50.514941 systemd-networkd[1250]: cni0: Gained carrier Apr 28 01:02:50.541751 systemd-networkd[1250]: cni0: Lost carrier Apr 28 01:02:50.673785 kernel: cni0: port 1(veth2fa99606) entered blocking state Apr 28 01:02:50.675597 kernel: cni0: port 1(veth2fa99606) entered disabled state Apr 28 01:02:50.684045 systemd-networkd[1250]: veth2fa99606: Link UP Apr 28 01:02:50.858017 kernel: veth2fa99606: entered allmulticast mode Apr 28 01:02:50.900372 kernel: veth2fa99606: entered promiscuous mode Apr 28 01:02:51.040129 kernel: cni0: port 1(veth2fa99606) entered blocking state Apr 28 01:02:51.041892 kernel: cni0: port 1(veth2fa99606) entered forwarding state Apr 28 01:02:51.041945 kernel: cni0: port 1(veth2fa99606) entered disabled state Apr 28 01:02:51.041957 kubelet[3069]: E0428 01:02:50.984376 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.217s" Apr 28 01:02:51.302010 kernel: cni0: port 1(veth2fa99606) entered blocking state Apr 28 01:02:51.303292 kernel: cni0: port 1(veth2fa99606) entered forwarding state Apr 28 01:02:51.296941 systemd-networkd[1250]: veth2fa99606: Gained carrier Apr 28 01:02:51.303937 systemd-networkd[1250]: cni0: Gained carrier Apr 28 01:02:51.361376 containerd[1586]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Apr 28 01:02:51.361376 containerd[1586]: delegateAdd: netconf sent to delegate plugin: Apr 28 01:02:51.806902 containerd[1586]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-28T01:02:51.752126547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:02:51.806902 containerd[1586]: time="2026-04-28T01:02:51.800805668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:02:51.806902 containerd[1586]: time="2026-04-28T01:02:51.800882049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:02:51.806902 containerd[1586]: time="2026-04-28T01:02:51.801367374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:02:51.813867 kubelet[3069]: E0428 01:02:51.813778 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:51.874802 containerd[1586]: time="2026-04-28T01:02:51.873656864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-682j5,Uid:5345e0a4-93aa-402f-8137-d129fbd0e8a0,Namespace:kube-system,Attempt:0,}" Apr 28 01:02:52.226191 systemd-networkd[1250]: cni0: Gained IPv6LL Apr 28 01:02:52.274012 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 01:02:52.404070 systemd-networkd[1250]: veth2fa99606: Gained IPv6LL Apr 28 01:02:52.656100 systemd-networkd[1250]: veth71b8670c: Link UP Apr 28 01:02:52.797156 kernel: cni0: port 2(veth71b8670c) entered blocking state Apr 28 01:02:52.799132 kernel: cni0: port 2(veth71b8670c) entered disabled state Apr 28 01:02:52.799178 kernel: veth71b8670c: entered allmulticast mode Apr 28 01:02:52.808041 kernel: veth71b8670c: entered promiscuous mode Apr 28 01:02:52.900348 kernel: cni0: port 2(veth71b8670c) entered blocking state Apr 28 01:02:52.909538 kernel: cni0: port 2(veth71b8670c) entered forwarding state Apr 28 01:02:52.908934 systemd-networkd[1250]: veth71b8670c: Gained carrier Apr 28 01:02:53.000884 containerd[1586]: time="2026-04-28T01:02:52.997959357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cp225,Uid:6fe59fc6-df4c-450d-a94e-46bc8c8fda1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f21d9cc4c4c0f3604bcd941899ed02de006aabac51ae89a3820d2234da5e81a\"" Apr 28 01:02:53.122957 containerd[1586]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000102950), "name":"cbr0", "type":"bridge"} Apr 28 01:02:53.122957 containerd[1586]: delegateAdd: netconf sent to delegate plugin: Apr 28 01:02:53.135587 kubelet[3069]: E0428 01:02:53.125816 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:54.172828 systemd-networkd[1250]: veth71b8670c: Gained IPv6LL Apr 28 01:02:55.690070 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:48534.service - OpenSSH per-connection server daemon (10.0.0.1:48534). Apr 28 01:02:55.950846 kubelet[3069]: E0428 01:02:55.949478 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.093s" Apr 28 01:02:55.962650 containerd[1586]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-04-28T01:02:55.962600749Z" level=info msg="CreateContainer within sandbox \"5f21d9cc4c4c0f3604bcd941899ed02de006aabac51ae89a3820d2234da5e81a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 01:02:56.027101 containerd[1586]: time="2026-04-28T01:02:56.026949273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:02:56.027101 containerd[1586]: time="2026-04-28T01:02:56.027137500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:02:56.027101 containerd[1586]: time="2026-04-28T01:02:56.027178122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:02:56.037850 containerd[1586]: time="2026-04-28T01:02:56.027550048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:02:56.109637 containerd[1586]: time="2026-04-28T01:02:56.109386213Z" level=info msg="CreateContainer within sandbox \"5f21d9cc4c4c0f3604bcd941899ed02de006aabac51ae89a3820d2234da5e81a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b10c9e366aefc5d4a5624ee6a24d12c61aaaf09f2b96538a77d3794d2d8f2433\"" Apr 28 01:02:56.255658 sshd[3880]: Accepted publickey for core from 10.0.0.1 port 48534 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:02:56.276348 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:02:57.213646 containerd[1586]: time="2026-04-28T01:02:57.213259622Z" level=info msg="StartContainer for \"b10c9e366aefc5d4a5624ee6a24d12c61aaaf09f2b96538a77d3794d2d8f2433\"" Apr 28 01:02:57.233759 systemd-logind[1560]: New session 8 of user core. Apr 28 01:02:57.397994 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 28 01:02:57.862265 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 01:02:58.985942 containerd[1586]: time="2026-04-28T01:02:58.983646397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-682j5,Uid:5345e0a4-93aa-402f-8137-d129fbd0e8a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"78793923b90985b0cf01bdbbd983e7ca61cb83f6b54f26b39c7eb0ff1670cfd3\"" Apr 28 01:02:59.142288 kubelet[3069]: E0428 01:02:59.141840 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:02:59.341245 sshd[3880]: pam_unix(sshd:session): session closed for user core Apr 28 01:02:59.350542 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:48534.service: Deactivated successfully. Apr 28 01:02:59.352816 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Apr 28 01:02:59.368111 systemd[1]: session-8.scope: Deactivated successfully. Apr 28 01:02:59.429300 containerd[1586]: time="2026-04-28T01:02:59.427125964Z" level=info msg="StartContainer for \"b10c9e366aefc5d4a5624ee6a24d12c61aaaf09f2b96538a77d3794d2d8f2433\" returns successfully" Apr 28 01:02:59.438496 systemd-logind[1560]: Removed session 8. Apr 28 01:02:59.461817 containerd[1586]: time="2026-04-28T01:02:59.458608609Z" level=info msg="CreateContainer within sandbox \"78793923b90985b0cf01bdbbd983e7ca61cb83f6b54f26b39c7eb0ff1670cfd3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 01:02:59.549938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370004556.mount: Deactivated successfully. Apr 28 01:02:59.552236 containerd[1586]: time="2026-04-28T01:02:59.552143922Z" level=info msg="CreateContainer within sandbox \"78793923b90985b0cf01bdbbd983e7ca61cb83f6b54f26b39c7eb0ff1670cfd3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e3e2c6abd1f4fa8f77f2d898f68008d2e4e740fea2ce2679ca901816de362ee\"" Apr 28 01:02:59.575562 containerd[1586]: time="2026-04-28T01:02:59.575131194Z" level=info msg="StartContainer for \"0e3e2c6abd1f4fa8f77f2d898f68008d2e4e740fea2ce2679ca901816de362ee\"" Apr 28 01:02:59.874835 containerd[1586]: time="2026-04-28T01:02:59.873875052Z" level=info msg="StartContainer for \"0e3e2c6abd1f4fa8f77f2d898f68008d2e4e740fea2ce2679ca901816de362ee\" returns successfully" Apr 28 01:03:00.491928 kubelet[3069]: E0428 01:03:00.491717 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:00.498891 kubelet[3069]: E0428 01:03:00.498811 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:00.520341 kubelet[3069]: I0428 01:03:00.519958 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cp225" podStartSLOduration=50.519944896 podStartE2EDuration="50.519944896s" podCreationTimestamp="2026-04-28 01:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:03:00.518838517 +0000 UTC m=+80.241978256" watchObservedRunningTime="2026-04-28 01:03:00.519944896 +0000 UTC m=+80.243084635" Apr 28 01:03:00.633659 kubelet[3069]: I0428 01:03:00.633348 3069 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-682j5" podStartSLOduration=50.633333804 podStartE2EDuration="50.633333804s" podCreationTimestamp="2026-04-28 01:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:03:00.632925019 +0000 UTC m=+80.356064759" watchObservedRunningTime="2026-04-28 01:03:00.633333804 +0000 UTC m=+80.356473543" Apr 28 01:03:01.511561 kubelet[3069]: E0428 01:03:01.511167 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:01.511561 kubelet[3069]: E0428 01:03:01.511324 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:02.753157 kubelet[3069]: E0428 01:03:02.751390 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:02.777474 kubelet[3069]: E0428 01:03:02.751513 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:04.657582 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:33914.service - OpenSSH per-connection server daemon (10.0.0.1:33914). Apr 28 01:03:05.793819 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 33914 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:03:05.810661 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:03:05.982583 systemd-logind[1560]: New session 9 of user core. Apr 28 01:03:05.999260 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 28 01:03:09.436353 kubelet[3069]: E0428 01:03:09.430249 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.309s" Apr 28 01:03:09.846775 kubelet[3069]: E0428 01:03:09.846694 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:10.089525 kubelet[3069]: E0428 01:03:10.088315 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:03:10.954675 kubelet[3069]: E0428 01:03:10.953937 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.11s" Apr 28 01:03:19.816839 sshd[4063]: pam_unix(sshd:session): session closed for user core Apr 28 01:03:21.305921 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:33914.service: Deactivated successfully. Apr 28 01:03:22.008036 systemd[1]: session-9.scope: Deactivated successfully. Apr 28 01:03:22.409991 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit. Apr 28 01:03:23.564220 systemd-logind[1560]: Removed session 9. Apr 28 01:03:26.316189 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:39336.service - OpenSSH per-connection server daemon (10.0.0.1:39336). 
Apr 28 01:03:39.679685 update_engine[1562]: I20260428 01:03:39.656579 1562 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 28 01:03:41.403963 update_engine[1562]: I20260428 01:03:39.710738 1562 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 28 01:03:41.403963 update_engine[1562]: I20260428 01:03:41.055115 1562 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 28 01:03:43.549302 update_engine[1562]: I20260428 01:03:42.033635 1562 omaha_request_params.cc:62] Current group set to lts Apr 28 01:03:43.549302 update_engine[1562]: I20260428 01:03:42.613085 1562 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 28 01:03:43.549302 update_engine[1562]: I20260428 01:03:42.764392 1562 update_attempter.cc:643] Scheduling an action processor start. Apr 28 01:03:43.549302 update_engine[1562]: I20260428 01:03:42.804745 1562 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 01:03:43.549302 update_engine[1562]: I20260428 01:03:43.266588 1562 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 28 01:03:48.845247 locksmithd[1636]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 28 01:03:49.166929 containerd[1586]: time="2026-04-28T01:03:47.776115777Z" level=error msg="post event" error="context deadline exceeded" Apr 28 01:03:49.300732 update_engine[1562]: I20260428 01:03:43.788375 1562 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 01:03:49.300732 update_engine[1562]: I20260428 01:03:44.018765 1562 omaha_request_action.cc:272] Request: Apr 28 01:03:49.300732 update_engine[1562]: Apr 28 01:03:49.300732 update_engine[1562]: Apr 28 01:03:49.300732 update_engine[1562]: Apr 28 01:03:49.300732 update_engine[1562]: Apr 28 01:03:49.300732 update_engine[1562]: Apr 28 01:03:49.300732 update_engine[1562]: Apr 28 01:03:49.300732 update_engine[1562]: Apr 28 01:03:49.300732 update_engine[1562]: Apr 28 01:03:49.300732 update_engine[1562]: I20260428 01:03:44.366930 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 01:03:49.300732 update_engine[1562]: I20260428 01:03:47.858801 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 01:03:49.300732 update_engine[1562]: E20260428 01:03:48.666025 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 01:03:49.300732 update_engine[1562]: I20260428 01:03:48.896347 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 28 01:03:50.047504 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 39336 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:03:49.943900 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:03:51.068500 containerd[1586]: time="2026-04-28T01:03:50.297083959Z" level=error msg="get state for ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de" error="context deadline exceeded: unknown" Apr 28 01:03:51.077924 containerd[1586]: time="2026-04-28T01:03:51.072860386Z" level=warning msg="unknown status" status=0 Apr 28 01:03:51.077924 containerd[1586]: time="2026-04-28T01:03:50.677216808Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Apr 28 01:03:51.097364 containerd[1586]: time="2026-04-28T01:03:51.095214645Z" level=error msg="ttrpc: received message on inactive stream" stream=17 
Apr 28 01:03:51.375799 kubelet[3069]: E0428 01:03:51.359402 3069 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 28 01:03:52.047262 systemd-logind[1560]: New session 10 of user core. Apr 28 01:03:52.084402 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 28 01:03:54.552889 containerd[1586]: time="2026-04-28T01:03:54.552718698Z" level=error msg="failed to handle container TaskExit event container_id:\"ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de\" id:\"ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de\" pid:2954 exit_status:1 exited_at:{seconds:1777338220 nanos:952879675}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 01:03:54.606363 containerd[1586]: time="2026-04-28T01:03:54.605655352Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Apr 28 01:03:54.609937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de-rootfs.mount: Deactivated successfully. Apr 28 01:03:54.709803 containerd[1586]: time="2026-04-28T01:03:54.674804936Z" level=info msg="shim disconnected" id=68453dd1112c3f293ee3f372c3448266eb9dce79c196792b9a160992611e1fa3 namespace=k8s.io Apr 28 01:03:54.709803 containerd[1586]: time="2026-04-28T01:03:54.675201399Z" level=warning msg="cleaning up after shim disconnected" id=68453dd1112c3f293ee3f372c3448266eb9dce79c196792b9a160992611e1fa3 namespace=k8s.io Apr 28 01:03:54.709803 containerd[1586]: time="2026-04-28T01:03:54.675322108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:03:54.999574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68453dd1112c3f293ee3f372c3448266eb9dce79c196792b9a160992611e1fa3-rootfs.mount: Deactivated successfully. Apr 28 01:03:56.258699 containerd[1586]: time="2026-04-28T01:03:56.208168720Z" level=warning msg="cleanup warnings time=\"2026-04-28T01:03:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 01:03:57.296191 containerd[1586]: time="2026-04-28T01:03:56.510517168Z" level=info msg="TaskExit event container_id:\"ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de\" id:\"ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de\" pid:2954 exit_status:1 exited_at:{seconds:1777338220 nanos:952879675}" Apr 28 01:03:58.231687 kubelet[3069]: E0428 01:03:58.230793 3069 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Apr 28 01:03:59.537752 update_engine[1562]: I20260428 01:03:59.484196 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 01:03:59.625907 update_engine[1562]: I20260428 01:03:59.595317 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 01:03:59.625907 update_engine[1562]: I20260428 01:03:59.618562 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 01:03:59.663636 update_engine[1562]: E20260428 01:03:59.662854 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 01:03:59.664198 update_engine[1562]: I20260428 01:03:59.664072 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 28 01:04:00.120251 containerd[1586]: time="2026-04-28T01:04:00.116364897Z" level=info msg="shim disconnected" id=ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de namespace=k8s.io Apr 28 01:04:00.166083 containerd[1586]: time="2026-04-28T01:04:00.131239158Z" level=warning msg="cleaning up after shim disconnected" id=ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de namespace=k8s.io Apr 28 01:04:00.166083 containerd[1586]: time="2026-04-28T01:04:00.131648291Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 01:04:00.915017 kubelet[3069]: E0428 01:04:00.914718 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="49.094s" Apr 28 01:04:09.565905 update_engine[1562]: I20260428 01:04:09.531764 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 01:04:10.347007 update_engine[1562]: I20260428 01:04:09.947988 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 01:04:10.347007 update_engine[1562]: I20260428 01:04:09.952486 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 28 01:04:10.595605 update_engine[1562]: E20260428 01:04:10.151232 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 01:04:10.595605 update_engine[1562]: I20260428 01:04:10.386699 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 28 01:04:10.962860 sshd[4122]: pam_unix(sshd:session): session closed for user core Apr 28 01:04:11.354333 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:36018.service - OpenSSH per-connection server daemon (10.0.0.1:36018). Apr 28 01:04:11.415496 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:39336.service: Deactivated successfully. Apr 28 01:04:11.540552 systemd[1]: session-10.scope: Deactivated successfully. Apr 28 01:04:11.881151 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Apr 28 01:04:12.297293 systemd-logind[1560]: Removed session 10. Apr 28 01:04:13.639651 kubelet[3069]: E0428 01:04:13.639048 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.643s" Apr 28 01:04:14.125169 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 36018 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:04:14.641287 sshd[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:04:17.620357 systemd-logind[1560]: New session 11 of user core. Apr 28 01:04:18.610352 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 28 01:04:20.755744 update_engine[1562]: I20260428 01:04:20.611310 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 01:04:22.238958 update_engine[1562]: I20260428 01:04:21.878680 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 01:04:23.019340 update_engine[1562]: I20260428 01:04:22.990119 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 01:04:23.420538 update_engine[1562]: E20260428 01:04:23.016357 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 01:04:23.420538 update_engine[1562]: I20260428 01:04:23.359633 1562 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 01:04:26.336285 update_engine[1562]: I20260428 01:04:23.677059 1562 omaha_request_action.cc:617] Omaha request response: Apr 28 01:04:26.336285 update_engine[1562]: E20260428 01:04:23.814399 1562 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 28 01:04:26.336285 update_engine[1562]: I20260428 01:04:24.260039 1562 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 28 01:04:26.336285 update_engine[1562]: I20260428 01:04:24.556012 1562 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 01:04:26.336285 update_engine[1562]: I20260428 01:04:24.563835 1562 update_attempter.cc:306] Processing Done. Apr 28 01:04:26.336285 update_engine[1562]: E20260428 01:04:24.570111 1562 update_attempter.cc:619] Update failed. Apr 28 01:04:26.336285 update_engine[1562]: I20260428 01:04:24.570883 1562 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 28 01:04:26.336285 update_engine[1562]: I20260428 01:04:24.570899 1562 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 28 01:04:26.336285 update_engine[1562]: I20260428 01:04:24.582379 1562 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 28 01:04:26.336285 update_engine[1562]: I20260428 01:04:25.314852 1562 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 28 01:04:26.336285 update_engine[1562]: I20260428 01:04:25.694272 1562 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 28 01:04:26.336285 update_engine[1562]: I20260428 01:04:25.695003 1562 omaha_request_action.cc:272] Request: Apr 28 01:04:26.336285 update_engine[1562]: Apr 28 01:04:26.336285 update_engine[1562]: Apr 28 01:04:26.336285 update_engine[1562]: Apr 28 01:04:26.336285 update_engine[1562]: Apr 28 01:04:26.336285 update_engine[1562]: Apr 28 01:04:26.336285 update_engine[1562]: Apr 28 01:04:26.336285 update_engine[1562]: I20260428 01:04:25.695050 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 28 01:04:37.886034 update_engine[1562]: I20260428 01:04:26.669339 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 28 01:04:37.886034 update_engine[1562]: I20260428 01:04:26.877205 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 28 01:04:37.886034 update_engine[1562]: E20260428 01:04:26.962845 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 28 01:04:37.886034 update_engine[1562]: I20260428 01:04:27.063170 1562 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 28 01:04:37.886034 update_engine[1562]: I20260428 01:04:27.080216 1562 omaha_request_action.cc:617] Omaha request response: Apr 28 01:04:37.886034 update_engine[1562]: I20260428 01:04:27.104981 1562 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 01:04:37.886034 update_engine[1562]: I20260428 01:04:27.105190 1562 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 28 01:04:37.886034 update_engine[1562]: I20260428 01:04:27.105193 1562 update_attempter.cc:306] Processing Done. Apr 28 01:04:37.886034 update_engine[1562]: I20260428 01:04:27.105233 1562 update_attempter.cc:310] Error event sent. Apr 28 01:04:37.886034 update_engine[1562]: I20260428 01:04:27.210641 1562 update_check_scheduler.cc:74] Next update check in 45m42s Apr 28 01:04:40.101379 locksmithd[1636]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 28 01:04:40.101379 locksmithd[1636]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 28 01:04:49.985604 kubelet[3069]: E0428 01:04:49.967295 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:04:53.071988 kubelet[3069]: I0428 01:04:53.069296 3069 scope.go:117] "RemoveContainer" containerID="68453dd1112c3f293ee3f372c3448266eb9dce79c196792b9a160992611e1fa3" Apr 28 01:04:53.852039 kubelet[3069]: E0428 01:04:53.848627 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:04:53.878217 kubelet[3069]: E0428 01:04:53.863973 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:04:53.878217 kubelet[3069]: E0428 01:04:53.864085 3069 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 28 01:04:55.126063 kubelet[3069]: E0428 01:04:55.085300 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:04:57.941569 kubelet[3069]: E0428 01:04:57.657219 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:04:58.276241 kubelet[3069]: E0428 01:04:57.069991 3069 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode9ca41790ae21be9f4cbd451ade0acec/ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de: task ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de not found Apr 28 01:04:58.773405 containerd[1586]: time="2026-04-28T01:04:58.771346129Z" level=info msg="CreateContainer within sandbox 
\"f982d6cbbc6dc2f50514182e4e7425d1dedaf958dd13f701ace907043e423a19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 28 01:04:58.965697 kubelet[3069]: E0428 01:04:58.961376 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="45.168s" Apr 28 01:04:58.992986 kubelet[3069]: I0428 01:04:58.991304 3069 scope.go:117] "RemoveContainer" containerID="ab10d805dac85f7dd4ed7f82eab798358e506b8bc5eab255262c65796f4a68de" Apr 28 01:04:59.158261 kubelet[3069]: E0428 01:04:59.156467 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:04:59.158261 kubelet[3069]: I0428 01:04:59.156573 3069 scope.go:117] "RemoveContainer" containerID="94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7" Apr 28 01:04:59.160123 kubelet[3069]: E0428 01:04:59.160099 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:04:59.266415 sshd[4276]: pam_unix(sshd:session): session closed for user core Apr 28 01:04:59.688787 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:36018.service: Deactivated successfully. Apr 28 01:04:59.787291 systemd[1]: session-11.scope: Deactivated successfully. Apr 28 01:04:59.853479 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Apr 28 01:04:59.864598 containerd[1586]: time="2026-04-28T01:04:59.858599996Z" level=info msg="RemoveContainer for \"94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7\"" Apr 28 01:04:59.864598 containerd[1586]: time="2026-04-28T01:04:59.863640673Z" level=info msg="CreateContainer within sandbox \"f982d6cbbc6dc2f50514182e4e7425d1dedaf958dd13f701ace907043e423a19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\"" Apr 28 01:04:59.885898 containerd[1586]: time="2026-04-28T01:04:59.882686454Z" level=info msg="CreateContainer within sandbox \"2ac400876d35f772f1fdc2675a5271f5dd8eb62cd0a69b024b6ed75bec901e74\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Apr 28 01:04:59.883845 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:54384.service - OpenSSH per-connection server daemon (10.0.0.1:54384). Apr 28 01:04:59.886648 systemd-logind[1560]: Removed session 11. 
Apr 28 01:04:59.887756 kubelet[3069]: I0428 01:04:59.887584 3069 scope.go:117] "RemoveContainer" containerID="94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7" Apr 28 01:04:59.890747 containerd[1586]: time="2026-04-28T01:04:59.886116120Z" level=info msg="RemoveContainer for \"94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7\" returns successfully" Apr 28 01:04:59.890747 containerd[1586]: time="2026-04-28T01:04:59.888758521Z" level=info msg="StartContainer for \"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\"" Apr 28 01:04:59.891821 containerd[1586]: time="2026-04-28T01:04:59.888858509Z" level=error msg="ContainerStatus for \"94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7\": not found" Apr 28 01:04:59.892565 kubelet[3069]: E0428 01:04:59.892539 3069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7\": not found" containerID="94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7" Apr 28 01:04:59.892917 kubelet[3069]: I0428 01:04:59.892816 3069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7"} err="failed to get container status \"94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7\": rpc error: code = NotFound desc = an error occurred when try to find container \"94be8f9ac1ac3d99e261bf5f4907e4b72a6d1941038c3cd31f2f68f6eefe6ed7\": not found" Apr 28 01:05:00.467802 kubelet[3069]: E0428 01:05:00.466742 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:05:00.604718 kubelet[3069]: E0428 01:05:00.601479 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:05:00.880678 containerd[1586]: time="2026-04-28T01:05:00.867362580Z" level=info msg="CreateContainer within sandbox \"2ac400876d35f772f1fdc2675a5271f5dd8eb62cd0a69b024b6ed75bec901e74\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\"" Apr 28 01:05:01.247372 containerd[1586]: time="2026-04-28T01:05:01.238677875Z" level=info msg="StartContainer for \"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\"" Apr 28 01:05:01.436296 sshd[4373]: Accepted publickey for core from 10.0.0.1 port 54384 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:05:01.459395 sshd[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:05:01.734934 systemd-logind[1560]: New session 12 of user core. Apr 28 01:05:01.879533 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 28 01:05:02.773803 containerd[1586]: time="2026-04-28T01:05:02.765093304Z" level=info msg="StartContainer for \"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" returns successfully" Apr 28 01:05:04.393645 containerd[1586]: time="2026-04-28T01:05:04.393257048Z" level=info msg="StartContainer for \"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" returns successfully" Apr 28 01:05:31.941185 systemd-journald[1175]: Under memory pressure, flushing caches. Apr 28 01:05:31.389178 systemd-resolved[1463]: Under memory pressure, flushing caches. Apr 28 01:05:33.469273 systemd-journald[1175]: Under memory pressure, flushing caches. Apr 28 01:05:32.839994 systemd-resolved[1463]: Flushed all caches. Apr 28 01:05:33.092764 systemd-resolved[1463]: Under memory pressure, flushing caches. Apr 28 01:05:33.128898 systemd-resolved[1463]: Flushed all caches. Apr 28 01:05:52.888079 sshd[4373]: pam_unix(sshd:session): session closed for user core Apr 28 01:05:53.883054 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:54384.service: Deactivated successfully. Apr 28 01:05:54.418387 systemd[1]: session-12.scope: Deactivated successfully. Apr 28 01:05:54.810849 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Apr 28 01:05:56.983696 systemd-logind[1560]: Removed session 12. Apr 28 01:05:57.251551 kubelet[3069]: E0428 01:05:57.250336 3069 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod33fee6ba1581201eda98a989140db110/68453dd1112c3f293ee3f372c3448266eb9dce79c196792b9a160992611e1fa3: task 68453dd1112c3f293ee3f372c3448266eb9dce79c196792b9a160992611e1fa3 not found Apr 28 01:05:57.257489 kubelet[3069]: E0428 01:05:57.252158 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:05:59.507014 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:36172.service - OpenSSH per-connection server daemon (10.0.0.1:36172). Apr 28 01:05:59.648458 kubelet[3069]: E0428 01:05:59.579925 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="52.702s" Apr 28 01:06:02.362057 kubelet[3069]: E0428 01:06:02.361299 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.771s" Apr 28 01:06:02.363371 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 36172 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:06:02.364344 kubelet[3069]: E0428 01:06:02.364237 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:02.366143 sshd[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:06:02.446983 kubelet[3069]: E0428 01:06:02.423303 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:02.784588 systemd-logind[1560]: New session 13 of user core. Apr 28 01:06:02.963091 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 28 01:06:03.085819 kubelet[3069]: E0428 01:06:03.078613 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:03.813804 kubelet[3069]: E0428 01:06:03.813122 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:04.154802 kubelet[3069]: E0428 01:06:04.145607 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:04.217713 kubelet[3069]: E0428 01:06:04.156407 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:04.217713 kubelet[3069]: E0428 01:06:04.168482 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:05.094734 sshd[4507]: pam_unix(sshd:session): session closed for user core Apr 28 01:06:05.180060 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:36172.service: Deactivated successfully. Apr 28 01:06:05.232479 systemd[1]: session-13.scope: Deactivated successfully. Apr 28 01:06:05.244158 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Apr 28 01:06:05.288586 kubelet[3069]: E0428 01:06:05.285076 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:05.293011 systemd-logind[1560]: Removed session 13. Apr 28 01:06:06.959342 kubelet[3069]: E0428 01:06:06.958976 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:07.452975 kubelet[3069]: E0428 01:06:07.441631 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:08.464668 kubelet[3069]: E0428 01:06:08.464260 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:10.196483 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:45448.service - OpenSSH per-connection server daemon (10.0.0.1:45448). Apr 28 01:06:10.748507 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 45448 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:06:10.763211 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:06:10.840863 systemd-logind[1560]: New session 14 of user core. Apr 28 01:06:10.878800 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 28 01:06:11.797837 sshd[4560]: pam_unix(sshd:session): session closed for user core Apr 28 01:06:11.850761 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:45448.service: Deactivated successfully. Apr 28 01:06:11.883375 systemd[1]: session-14.scope: Deactivated successfully. Apr 28 01:06:11.884805 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. 
Apr 28 01:06:11.886903 systemd-logind[1560]: Removed session 14. Apr 28 01:06:12.620481 kubelet[3069]: E0428 01:06:12.619105 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:12.740985 kubelet[3069]: E0428 01:06:12.740640 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:18.264193 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:45458.service - OpenSSH per-connection server daemon (10.0.0.1:45458). Apr 28 01:06:35.224249 sshd[4604]: Accepted publickey for core from 10.0.0.1 port 45458 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:06:35.443099 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:06:36.085204 systemd-logind[1560]: New session 15 of user core. Apr 28 01:06:36.114109 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 28 01:06:38.964414 kubelet[3069]: E0428 01:06:38.959991 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.094s" Apr 28 01:06:39.707868 kubelet[3069]: E0428 01:06:39.707720 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:39.772705 kubelet[3069]: E0428 01:06:39.723543 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:06:43.574657 kubelet[3069]: E0428 01:06:43.561932 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.346s" Apr 28 01:06:46.390322 sshd[4604]: pam_unix(sshd:session): session closed for user core Apr 28 01:06:47.209809 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:45458.service: Deactivated successfully. Apr 28 01:06:47.578336 systemd[1]: session-15.scope: Deactivated successfully. Apr 28 01:06:47.614045 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Apr 28 01:06:47.621817 systemd-logind[1560]: Removed session 15. Apr 28 01:06:47.719930 kubelet[3069]: E0428 01:06:47.718228 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.917s" Apr 28 01:06:51.964146 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:49634.service - OpenSSH per-connection server daemon (10.0.0.1:49634). Apr 28 01:06:53.481570 kubelet[3069]: E0428 01:06:53.478495 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.665s" Apr 28 01:06:53.911881 sshd[4696]: Accepted publickey for core from 10.0.0.1 port 49634 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:06:53.946648 sshd[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:06:54.160569 systemd-logind[1560]: New session 16 of user core. Apr 28 01:06:54.283121 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 28 01:06:55.738187 sshd[4696]: pam_unix(sshd:session): session closed for user core Apr 28 01:06:55.764326 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:49650.service - OpenSSH per-connection server daemon (10.0.0.1:49650). Apr 28 01:06:55.776785 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:49634.service: Deactivated successfully. Apr 28 01:06:55.909221 systemd[1]: session-16.scope: Deactivated successfully. Apr 28 01:06:56.001068 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Apr 28 01:06:56.011664 systemd-logind[1560]: Removed session 16. Apr 28 01:06:56.542669 sshd[4722]: Accepted publickey for core from 10.0.0.1 port 49650 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:06:56.610534 sshd[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:06:56.991315 systemd-logind[1560]: New session 17 of user core. Apr 28 01:06:57.062890 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 28 01:06:59.754490 sshd[4722]: pam_unix(sshd:session): session closed for user core Apr 28 01:06:59.774053 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:43664.service - OpenSSH per-connection server daemon (10.0.0.1:43664). Apr 28 01:06:59.776317 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:49650.service: Deactivated successfully. Apr 28 01:06:59.778484 systemd[1]: session-17.scope: Deactivated successfully. Apr 28 01:06:59.780122 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit. Apr 28 01:06:59.781241 systemd-logind[1560]: Removed session 17. Apr 28 01:06:59.869651 sshd[4754]: Accepted publickey for core from 10.0.0.1 port 43664 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:06:59.878737 sshd[4754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:06:59.909348 systemd-logind[1560]: New session 18 of user core. Apr 28 01:06:59.936254 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 28 01:07:07.508831 sshd[4754]: pam_unix(sshd:session): session closed for user core Apr 28 01:07:07.917880 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:43680.service - OpenSSH per-connection server daemon (10.0.0.1:43680). Apr 28 01:07:08.779845 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:43664.service: Deactivated successfully. Apr 28 01:07:08.992221 systemd[1]: session-18.scope: Deactivated successfully. Apr 28 01:07:09.065200 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit. Apr 28 01:07:09.242093 kubelet[3069]: E0428 01:07:09.241360 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.282s" Apr 28 01:07:09.259120 systemd-logind[1560]: Removed session 18. Apr 28 01:07:12.993293 sshd[4793]: Accepted publickey for core from 10.0.0.1 port 43680 ssh2: RSA SHA256:h1NhNWfC+Dmp6wIIzeQOuTKnwsIBe3BN0LKeEqeOidc Apr 28 01:07:14.273251 kubelet[3069]: E0428 01:07:13.218955 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.322s" Apr 28 01:07:14.881392 sshd[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:07:15.115041 kubelet[3069]: E0428 01:07:15.114913 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:07:18.580525 systemd-logind[1560]: New session 19 of user core. 
Apr 28 01:07:18.802265 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 28 01:07:48.977160 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 28 01:08:00.910401 containerd[1586]: time="2026-04-28T01:07:59.468955261Z" level=error msg="post event" error="context deadline exceeded" Apr 28 01:08:07.397128 containerd[1586]: time="2026-04-28T01:08:07.022856493Z" level=error msg="forward event" error="context deadline exceeded" Apr 28 01:08:09.812201 containerd[1586]: time="2026-04-28T01:08:09.168527688Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Apr 28 01:08:12.537693 containerd[1586]: time="2026-04-28T01:08:11.144353288Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 28 01:08:14.238264 containerd[1586]: time="2026-04-28T01:08:14.182048043Z" level=error msg="get state for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="context deadline exceeded: unknown" Apr 28 01:08:14.238264 containerd[1586]: time="2026-04-28T01:08:14.185005246Z" level=warning msg="unknown status" status=0 Apr 28 01:08:14.238264 containerd[1586]: time="2026-04-28T01:08:14.190212515Z" level=error msg="failed to handle container TaskExit event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686}" error="failed to stop container: context deadline exceeded: unknown" Apr 28 01:08:14.238264 containerd[1586]: time="2026-04-28T01:08:14.190171452Z" level=error msg="get state for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="context deadline exceeded: unknown" Apr 28 01:08:14.238264 containerd[1586]: time="2026-04-28T01:08:14.197797896Z" level=warning msg="unknown status" status=0 Apr 28 01:08:14.238264 containerd[1586]: time="2026-04-28T01:08:14.184005389Z" level=error msg="ttrpc: received message on inactive stream" stream=23 Apr 28 01:08:16.978619 containerd[1586]: time="2026-04-28T01:08:14.681844785Z" level=error msg="ttrpc: received message on inactive stream" stream=25 Apr 28 01:08:17.951791 containerd[1586]: time="2026-04-28T01:08:17.949080045Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Apr 28 01:08:21.677311 containerd[1586]: time="2026-04-28T01:08:21.676387224Z" level=error msg="ttrpc: received message on inactive stream" stream=25 Apr 28 01:08:30.439157 kubelet[3069]: I0428 01:08:30.437742 3069 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:08:32.366005 systemd-tmpfiles[4837]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 28 01:08:33.009932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9-rootfs.mount: Deactivated successfully. Apr 28 01:08:33.188218 systemd-tmpfiles[4837]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Apr 28 01:08:33.767346 containerd[1586]: time="2026-04-28T01:08:33.640702839Z" level=info msg="TaskExit event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686}" Apr 28 01:08:35.181278 containerd[1586]: time="2026-04-28T01:08:35.165498100Z" level=error msg="failed to handle container TaskExit event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 01:08:35.376004 systemd-tmpfiles[4837]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 28 01:08:35.558659 systemd-tmpfiles[4837]: ACLs are not supported, ignoring. Apr 28 01:08:35.565946 systemd-tmpfiles[4837]: ACLs are not supported, ignoring. Apr 28 01:08:36.708224 kubelet[3069]: I0428 01:08:36.587763 3069 reflector.go:556] "Warning: watch ended with error" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:08:36.748005 containerd[1586]: time="2026-04-28T01:08:36.742411387Z" level=error msg="get state for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="context deadline exceeded: unknown" Apr 28 01:08:36.748005 containerd[1586]: time="2026-04-28T01:08:36.744225783Z" level=warning msg="unknown status" status=0 Apr 28 01:08:38.178031 containerd[1586]: time="2026-04-28T01:08:37.197816214Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Apr 28 01:08:38.959968 containerd[1586]: time="2026-04-28T01:08:38.958335248Z" level=error msg="get state for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="context deadline exceeded: unknown" Apr 28 01:08:39.005013 containerd[1586]: time="2026-04-28T01:08:38.994987791Z" level=warning msg="unknown status" status=0 Apr 28 01:08:39.069805 containerd[1586]: time="2026-04-28T01:08:38.353372251Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 28 01:08:39.157001 systemd-tmpfiles[4837]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 01:08:39.178917 systemd-tmpfiles[4837]: Skipping /boot Apr 28 01:08:40.908411 containerd[1586]: time="2026-04-28T01:08:40.857967853Z" level=error msg="ttrpc: received message on inactive stream" stream=31 Apr 28 01:08:42.006355 kubelet[3069]: I0428 01:08:35.637967 3069 reflector.go:556] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:08:42.417018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b-rootfs.mount: Deactivated successfully. Apr 28 01:08:45.277983 containerd[1586]: time="2026-04-28T01:08:45.265226071Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Apr 28 01:08:46.318334 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. 
Apr 28 01:08:46.347399 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 28 01:08:48.511022 kubelet[3069]: I0428 01:08:47.084720 3069 reflector.go:556] "Warning: watch ended with error" reflector="object-\"kube-flannel\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:08:48.511022 kubelet[3069]: I0428 01:08:47.095614 3069 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:08:49.750358 containerd[1586]: time="2026-04-28T01:08:45.918601424Z" level=error msg="Failed to handle backOff event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686} for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 01:08:49.750358 containerd[1586]: time="2026-04-28T01:08:47.291373170Z" level=info msg="TaskExit event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808}" Apr 28 01:08:52.709159 containerd[1586]: time="2026-04-28T01:08:51.621015489Z" level=error msg="get state for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="context deadline exceeded: unknown" Apr 28 01:08:55.040789 containerd[1586]: time="2026-04-28T01:08:51.556009921Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 28 01:08:56.246172 containerd[1586]: time="2026-04-28T01:08:53.672678326Z" level=warning msg="unknown status" status=0 Apr 28 01:09:08.054926 kubelet[3069]: I0428 01:08:58.984255 3069 reflector.go:556] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:09:13.687412 containerd[1586]: time="2026-04-28T01:09:13.684927554Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Apr 28 01:09:21.981038 kubelet[3069]: I0428 01:09:21.980343 3069 reflector.go:556] "Warning: watch ended with error" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:09:34.775634 containerd[1586]: time="2026-04-28T01:09:34.648089084Z" level=error msg="Failed to handle backOff event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808} for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="failed to handle container TaskExit event: failed to stop container: failed to delete 
task: context deadline exceeded: unknown" Apr 28 01:09:43.219773 containerd[1586]: time="2026-04-28T01:09:43.061935656Z" level=info msg="TaskExit event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686}" Apr 28 01:09:52.052328 kubelet[3069]: W0428 01:09:13.612371 3069 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/systemd-tmpfiles-clean.service": 0x40000100 == IN_CREATE|IN_ISDIR): open /sys/fs/cgroup/pids/system.slice/systemd-tmpfiles-clean.service: no such file or directory Apr 28 01:09:54.170831 containerd[1586]: time="2026-04-28T01:09:52.483313903Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Apr 28 01:09:54.170831 containerd[1586]: time="2026-04-28T01:09:52.969269200Z" level=error msg="get state for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="context deadline exceeded: unknown" Apr 28 01:09:55.485901 containerd[1586]: time="2026-04-28T01:09:54.778594012Z" level=warning msg="unknown status" status=0 Apr 28 01:10:04.427938 kubelet[3069]: E0428 01:10:04.418981 3069 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 28 01:10:10.585556 containerd[1586]: time="2026-04-28T01:10:09.163205800Z" level=error msg="get state for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="context deadline exceeded: unknown" Apr 28 01:10:10.585556 containerd[1586]: time="2026-04-28T01:10:10.583098068Z" level=warning msg="unknown status" status=0 Apr 28 01:10:10.585556 containerd[1586]: time="2026-04-28T01:10:10.585628103Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Apr 28 01:10:14.641052 containerd[1586]: time="2026-04-28T01:10:14.537046002Z" level=error msg="Failed to handle backOff event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686} for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown" Apr 28 01:10:16.103728 containerd[1586]: time="2026-04-28T01:10:16.051710597Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 28 01:10:19.093101 containerd[1586]: time="2026-04-28T01:10:19.078221673Z" level=info msg="TaskExit event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808}" Apr 28 01:10:20.901156 kubelet[3069]: I0428 01:10:05.653758 3069 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 28 01:10:33.075511 containerd[1586]: time="2026-04-28T01:10:33.063756449Z" level=error msg="Failed to handle backOff event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 
exited_at:{seconds:1777338473 nanos:575377808} for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 28 01:10:35.452198 containerd[1586]: time="2026-04-28T01:10:34.001164214Z" level=error msg="ttrpc: received message on inactive stream" stream=39
Apr 28 01:10:40.231529 containerd[1586]: time="2026-04-28T01:10:35.208246555Z" level=error msg="ttrpc: received message on inactive stream" stream=37
Apr 28 01:10:42.983071 containerd[1586]: time="2026-04-28T01:10:41.301135096Z" level=info msg="TaskExit event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686}"
Apr 28 01:10:45.043664 containerd[1586]: time="2026-04-28T01:10:45.042610097Z" level=error msg="get state for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="context deadline exceeded: unknown"
Apr 28 01:10:45.467364 kubelet[3069]: E0428 01:10:45.044386 3069 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil"
Apr 28 01:10:46.185736 containerd[1586]: time="2026-04-28T01:10:45.043388873Z" level=warning msg="unknown status" status=0
Apr 28 01:10:46.185736 containerd[1586]: time="2026-04-28T01:10:45.865848284Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Apr 28 01:10:46.185736 containerd[1586]: time="2026-04-28T01:10:45.866774404Z" level=error msg="get state for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="context deadline exceeded: unknown"
Apr 28 01:10:46.185736 containerd[1586]: time="2026-04-28T01:10:45.866798651Z" level=warning msg="unknown status" status=0
Apr 28 01:10:46.185736 containerd[1586]: time="2026-04-28T01:10:45.867177027Z" level=error msg="Failed to handle backOff event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686} for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 28 01:10:46.185736 containerd[1586]: time="2026-04-28T01:10:45.867254481Z" level=info msg="TaskExit event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808}"
Apr 28 01:10:50.189159 kubelet[3069]: I0428 01:10:03.583390 3069 reflector.go:556] "Warning: watch ended with error" reflector="object-\"kube-flannel\"/\"kube-flannel-cfg\"" type="*v1.ConfigMap" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
Apr 28 01:10:51.601222 containerd[1586]: time="2026-04-28T01:10:49.457052509Z" level=error msg="get state for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="context deadline exceeded: unknown"
Apr 28 01:10:54.117165 containerd[1586]: time="2026-04-28T01:10:51.748016822Z" level=warning msg="unknown status" status=0
Apr 28 01:10:56.547287 containerd[1586]: time="2026-04-28T01:10:51.748029756Z" level=error msg="ttrpc: received message on inactive stream" stream=45
Apr 28 01:10:58.260154 containerd[1586]: time="2026-04-28T01:10:51.748033373Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Apr 28 01:10:58.865287 containerd[1586]: time="2026-04-28T01:10:55.268107105Z" level=error msg="ttrpc: received message on inactive stream" stream=43
Apr 28 01:11:05.220362 kubelet[3069]: E0428 01:10:47.211979 3069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="0fc351b6e5c3f6c51a95a275ea55a3211e3436113ed6408d31315abd973d665a"
Apr 28 01:11:07.747827 kubelet[3069]: E0428 01:11:04.988012 3069 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
Apr 28 01:11:07.980413 containerd[1586]: time="2026-04-28T01:11:07.366875076Z" level=error msg="ttrpc: received message on inactive stream" stream=43
Apr 28 01:11:08.138191 containerd[1586]: time="2026-04-28T01:11:07.711164321Z" level=error msg="get state for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="context deadline exceeded: unknown"
Apr 28 01:11:08.138191 containerd[1586]: time="2026-04-28T01:11:08.065244678Z" level=warning msg="unknown status" status=0
Apr 28 01:11:10.949385 containerd[1586]: time="2026-04-28T01:11:10.932221908Z" level=error msg="ttrpc: received message on inactive stream" stream=45
Apr 28 01:11:11.787041 containerd[1586]: time="2026-04-28T01:11:11.700393159Z" level=error msg="Failed to handle backOff event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808} for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 28 01:11:12.883046 containerd[1586]: time="2026-04-28T01:11:12.407206389Z" level=info msg="TaskExit event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686}"
Apr 28 01:11:23.621987 kubelet[3069]: E0428 01:11:06.597120 3069 kubelet.go:3102] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:11:26.757983 containerd[1586]: time="2026-04-28T01:11:26.527658616Z" level=error msg="Failed to handle backOff event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686} for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 28 01:11:27.865347 containerd[1586]: time="2026-04-28T01:11:27.855090486Z" level=error msg="ttrpc: received message on inactive stream" stream=53
Apr 28 01:11:29.365354 containerd[1586]: time="2026-04-28T01:11:28.172892810Z" level=error msg="ttrpc: received message on inactive stream" stream=51
Apr 28 01:11:29.365354 containerd[1586]: time="2026-04-28T01:11:28.176887501Z" level=info msg="TaskExit event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808}"
Apr 28 01:11:32.860818 kubelet[3069]: I0428 01:11:16.114606 3069 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
Apr 28 01:11:38.701384 containerd[1586]: time="2026-04-28T01:11:38.690656551Z" level=error msg="ttrpc: received message on inactive stream" stream=49
Apr 28 01:11:38.701384 containerd[1586]: time="2026-04-28T01:11:38.691111252Z" level=error msg="ttrpc: received message on inactive stream" stream=53
Apr 28 01:11:40.100149 containerd[1586]: time="2026-04-28T01:11:38.895973885Z" level=error msg="Failed to handle backOff event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808} for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 28 01:11:43.877475 containerd[1586]: time="2026-04-28T01:11:43.860497785Z" level=info msg="TaskExit event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686}"
Apr 28 01:11:45.589296 kubelet[3069]: E0428 01:11:05.607273 3069 container_log_manager.go:274] "Failed to get container status" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" worker=1 containerID="0fc351b6e5c3f6c51a95a275ea55a3211e3436113ed6408d31315abd973d665a"
Apr 28 01:11:56.405338 kubelet[3069]: E0428 01:11:46.246143 3069 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil"
Apr 28 01:11:57.553180 containerd[1586]: time="2026-04-28T01:11:57.261313524Z" level=error msg="ttrpc: received message on inactive stream" stream=59
Apr 28 01:11:57.553180 containerd[1586]: time="2026-04-28T01:11:57.361214059Z" level=error msg="Failed to handle backOff event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686} for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 28 01:11:57.553180 containerd[1586]: time="2026-04-28T01:11:57.372139406Z" level=info msg="TaskExit event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808}"
Apr 28 01:12:00.145170 kubelet[3069]: E0428 01:11:47.085077 3069 kuberuntime_image.go:104] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:12:01.825322 containerd[1586]: time="2026-04-28T01:12:01.458093215Z" level=error msg="ttrpc: received message on inactive stream" stream=61
Apr 28 01:12:03.407753 kubelet[3069]: E0428 01:12:03.399918 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:12:04.333969 kubelet[3069]: E0428 01:11:34.998823 3069 kuberuntime_container.go:540] "ListContainers failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:12:09.324524 containerd[1586]: time="2026-04-28T01:12:08.720660935Z" level=error msg="Failed to handle backOff event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808} for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Apr 28 01:12:10.769616 containerd[1586]: time="2026-04-28T01:12:10.352329924Z" level=error msg="ttrpc: received message on inactive stream" stream=61
Apr 28 01:12:14.065810 kubelet[3069]: E0428 01:11:48.750992 3069 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 28 01:12:22.470098 kubelet[3069]: I0428 01:12:06.971015 3069 image_gc_manager.go:230] "Failed to update image list" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:12:24.175698 kubelet[3069]: E0428 01:12:19.116203 3069 kuberuntime_sandbox.go:294] "Failed to list pod sandboxes" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:12:34.968216 kubelet[3069]: E0428 01:12:28.451288 3069 generic.go:256] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:12:35.884225 containerd[1586]: time="2026-04-28T01:12:34.556368093Z" level=info msg="TaskExit event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686}"
Apr 28 01:12:45.291667 containerd[1586]: time="2026-04-28T01:12:45.198945044Z" level=error msg="get state for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="context deadline exceeded: unknown"
Apr 28 01:12:48.094933 containerd[1586]: time="2026-04-28T01:12:45.304799038Z" level=warning msg="unknown status" status=0
Apr 28 01:12:52.502220 containerd[1586]: time="2026-04-28T01:12:52.447258911Z" level=error msg="ttrpc: received message on inactive stream" stream=63
Apr 28 01:12:54.821100 kubelet[3069]: E0428 01:12:21.487902 3069 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.21:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=500\": dial tcp 10.0.0.21:6443: i/o timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
Apr 28 01:13:03.406400 containerd[1586]: time="2026-04-28T01:13:03.398735397Z" level=error msg="get state for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="context deadline exceeded: unknown"
Apr 28 01:13:03.406400 containerd[1586]: time="2026-04-28T01:13:03.399171845Z" level=warning msg="unknown status" status=0
Apr 28 01:13:07.889200 containerd[1586]: time="2026-04-28T01:13:07.887209906Z" level=error msg="ttrpc: received message on inactive stream" stream=65
Apr 28 01:13:40.738787 kubelet[3069]: E0428 01:13:27.553338 3069 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.21:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=500\": dial tcp 10.0.0.21:6443: i/o timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 28 01:13:52.381584 containerd[1586]: time="2026-04-28T01:13:52.358839319Z" level=error msg="ttrpc: received message on inactive stream" stream=67
Apr 28 01:13:54.899594 kubelet[3069]: E0428 01:13:40.466487 3069 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 01:14:05.243313 containerd[1586]: time="2026-04-28T01:14:05.200973917Z" level=error msg="Failed to handle backOff event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686} for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 28 01:14:10.729417 kubelet[3069]: E0428 01:13:48.498372 3069 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:14:12.603320 kubelet[3069]: E0428 01:13:33.111210 3069 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 28 01:14:15.820155 kubelet[3069]: E0428 01:14:13.154474 3069 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil"
Apr 28 01:14:25.409082 containerd[1586]: time="2026-04-28T01:14:22.562270488Z" level=info msg="TaskExit event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808}"
Apr 28 01:14:57.172399 kubelet[3069]: E0428 01:14:25.318200 3069 kubelet.go:3102] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:15:01.821016 kubelet[3069]: E0428 01:15:01.798973 3069 kuberuntime_image.go:104] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:15:17.454828 containerd[1586]: time="2026-04-28T01:15:17.454521777Z" level=error msg="get state for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="context deadline exceeded: unknown"
Apr 28 01:15:18.548177 containerd[1586]: time="2026-04-28T01:15:17.477181984Z" level=warning msg="unknown status" status=0
Apr 28 01:15:24.922348 kubelet[3069]: E0428 01:14:37.427635 3069 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},}"
Apr 28 01:15:29.253889 containerd[1586]: time="2026-04-28T01:15:18.670026952Z" level=error msg="ttrpc: received message on inactive stream" stream=63
Apr 28 01:15:31.580653 containerd[1586]: time="2026-04-28T01:15:30.692951193Z" level=error msg="get state for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="context deadline exceeded: unknown"
Apr 28 01:15:31.580653 containerd[1586]: time="2026-04-28T01:15:30.697931871Z" level=warning msg="unknown status" status=0
Apr 28 01:15:31.580653 containerd[1586]: time="2026-04-28T01:15:30.811905911Z" level=error msg="Failed to handle backOff event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808} for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 28 01:15:31.580653 containerd[1586]: time="2026-04-28T01:15:30.851055126Z" level=info msg="TaskExit event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686}"
Apr 28 01:15:45.294902 kubelet[3069]: E0428 01:15:24.585640 3069 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
Apr 28 01:15:50.946959 kubelet[3069]: E0428 01:15:50.919137 3069 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil"
Apr 28 01:15:50.946959 kubelet[3069]: E0428 01:15:50.935523 3069 kuberuntime_sandbox.go:294] "Failed to list pod sandboxes" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:15:50.946959 kubelet[3069]: E0428 01:15:50.936191 3069 generic.go:256] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:15:52.158831 kubelet[3069]: E0428 01:15:32.958912 3069 kuberuntime_container.go:540] "ListContainers failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:15:52.158831 kubelet[3069]: E0428 01:15:52.145800 3069 kubelet_pods.go:1203] "Error listing containers" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:15:52.158831 kubelet[3069]: E0428 01:15:52.146365 3069 kubelet.go:2623] "Failed cleaning pods" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:15:52.158831 kubelet[3069]: E0428 01:15:52.146382 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8m37.878s"
Apr 28 01:15:52.158831 kubelet[3069]: E0428 01:15:52.146777 3069 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 8m40.556851803s ago; threshold is 3m0s]"
Apr 28 01:15:56.207042 kubelet[3069]: E0428 01:15:55.365624 3069 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=994\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 01:15:57.784536 containerd[1586]: time="2026-04-28T01:15:51.687868883Z" level=error msg="ttrpc: received message on inactive stream" stream=75
Apr 28 01:16:00.926223 kubelet[3069]: E0428 01:15:52.506281 3069 log.go:32] "Version from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:16:04.182947 containerd[1586]: time="2026-04-28T01:16:04.178611970Z" level=error msg="Failed to handle backOff event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686} for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="failed to handle container TaskExit event: failed to stop container: unknown error after kill: runc did not terminate successfully: exit status 137: : unknown"
Apr 28 01:16:12.807531 containerd[1586]: time="2026-04-28T01:15:49.894355553Z" level=error msg="ttrpc: received message on inactive stream" stream=65
Apr 28 01:16:14.717912 containerd[1586]: time="2026-04-28T01:16:12.788163228Z" level=error msg="ttrpc: received message on inactive stream" stream=67
Apr 28 01:16:15.524137 kubelet[3069]: I0428 01:15:19.325252 3069 image_gc_manager.go:222] "Failed to monitor images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:16:19.690989 kubelet[3069]: E0428 01:16:08.111975 3069 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil"
Apr 28 01:16:26.477389 kubelet[3069]: E0428 01:15:27.179175 3069 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
Apr 28 01:16:39.155888 kubelet[3069]: E0428 01:16:24.365113 3069 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 8m50.519594496s ago; threshold is 3m0s]"
Apr 28 01:16:41.505018 kubelet[3069]: E0428 01:16:23.569074 3069 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:16:46.122000 containerd[1586]: time="2026-04-28T01:16:42.512411208Z" level=info msg="TaskExit event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808}"
Apr 28 01:16:52.147024 kubelet[3069]: E0428 01:14:10.896673 3069 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.21:6443/api/v1/namespaces/kube-system/events/coredns-674b8bbfcf-682j5.18aa5fc3e6402bab\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{coredns-674b8bbfcf-682j5.18aa5fc3e6402bab kube-system 943 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-674b8bbfcf-682j5,UID:5345e0a4-93aa-402f-8137-d129fbd0e8a0,APIVersion:v1,ResourceVersion:580,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Liveness probe failed: Get \"http://192.168.0.3:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:04:57 +0000 UTC,LastTimestamp:2026-04-28 01:07:31.265371026 +0000 UTC m=+350.988510764,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 28 01:16:59.692880 kubelet[3069]: E0428 01:16:48.178881 3069 kuberuntime_container.go:540] "ListContainers failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:17:01.616059 kubelet[3069]: E0428 01:16:54.389136 3069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="b10c9e366aefc5d4a5624ee6a24d12c61aaaf09f2b96538a77d3794d2d8f2433"
Apr 28 01:17:03.403224 kubelet[3069]: E0428 01:17:02.467732 3069 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 28 01:17:06.358055 kubelet[3069]: E0428 01:16:43.568645 3069 kuberuntime_image.go:104] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:17:11.064252 containerd[1586]: time="2026-04-28T01:17:11.061950601Z" level=error msg="ttrpc: received message on inactive stream" stream=69
Apr 28 01:17:12.571016 containerd[1586]: time="2026-04-28T01:17:11.062149159Z" level=error msg="get state for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="context deadline exceeded: unknown"
Apr 28 01:17:15.065959 containerd[1586]: time="2026-04-28T01:17:11.574162108Z" level=warning msg="unknown status" status=0
Apr 28 01:17:17.063412 containerd[1586]: time="2026-04-28T01:17:16.629801256Z" level=error msg="ttrpc: received message on inactive stream" stream=71
Apr 28 01:17:18.809829 kubelet[3069]: I0428 01:16:57.073890 3069 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-28T01:16:32Z","lastTransitionTime":"2026-04-28T01:16:32Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 9m21.410993672s ago; threshold is 3m0s]"}
Apr 28 01:17:21.107170 kubelet[3069]: E0428 01:17:05.649887 3069 container_log_manager.go:274] "Failed to get container status" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" worker=1 containerID="b10c9e366aefc5d4a5624ee6a24d12c61aaaf09f2b96538a77d3794d2d8f2433"
Apr 28 01:17:23.184787 containerd[1586]: time="2026-04-28T01:17:16.079943528Z" level=error msg="get state for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="context deadline exceeded: unknown"
Apr 28 01:17:23.184787 containerd[1586]: time="2026-04-28T01:17:23.177289470Z" level=warning msg="unknown status" status=0
Apr 28 01:17:29.537218 kubelet[3069]: E0428 01:17:03.604876 3069 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil"
Apr 28 01:17:29.537218 kubelet[3069]: E0428 01:17:26.647692 3069 kuberuntime_sandbox.go:294] "Failed to list pod sandboxes" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:17:38.634846 kubelet[3069]: E0428 01:16:51.302871 3069 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 9m32.802170682s ago; threshold is 3m0s]"
Apr 28 01:17:46.983829 kubelet[3069]: E0428 01:17:29.351788 3069 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 01:17:49.379641 kubelet[3069]: I0428 01:17:31.371777 3069 image_gc_manager.go:230] "Failed to update image list" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:17:53.593321 kubelet[3069]: E0428 01:17:46.764247 3069 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:17:58.292411 containerd[1586]: time="2026-04-28T01:17:48.987894360Z" level=error msg="Failed to handle backOff event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808} for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 28 01:18:06.750237 containerd[1586]: time="2026-04-28T01:17:50.969237877Z" level=error msg="ttrpc: received message on inactive stream" stream=73
Apr 28 01:18:19.585084 containerd[1586]: time="2026-04-28T01:18:16.687246605Z" level=info msg="TaskExit event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686}"
Apr 28 01:18:30.055245 kubelet[3069]: E0428 01:18:16.115269 3069 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.0.0.21:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1026\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/kubelet/config/apiserver.go:66" type="*v1.Pod"
Apr 28 01:18:34.539325 containerd[1586]: time="2026-04-28T01:18:33.882047790Z" level=error msg="get state for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="context deadline exceeded: unknown"
Apr 28 01:18:36.099918 containerd[1586]: time="2026-04-28T01:18:34.893025232Z" level=warning msg="unknown status" status=0
Apr 28 01:18:36.099918 containerd[1586]: time="2026-04-28T01:18:34.505335913Z" level=error msg="ttrpc: received message on inactive stream" stream=77
Apr 28 01:18:39.802145 containerd[1586]: time="2026-04-28T01:18:39.793252671Z" level=error msg="get state for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="context deadline exceeded: unknown"
Apr 28 01:18:39.802145 containerd[1586]: time="2026-04-28T01:18:39.795384334Z" level=warning msg="unknown status" status=0
Apr 28 01:18:42.500664 containerd[1586]: time="2026-04-28T01:18:39.968957129Z" level=error msg="ttrpc: received message on inactive stream" stream=79
Apr 28 01:18:46.457779 kubelet[3069]: E0428 01:18:42.252274 3069 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 10m44.117973324s ago; threshold is 3m0s]"
Apr 28 01:18:57.797121 kubelet[3069]: E0428 01:18:32.384958 3069 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil"
Apr 28 01:19:00.050141 containerd[1586]: time="2026-04-28T01:18:59.500382992Z" level=error msg="ttrpc: received message on inactive stream" stream=81
Apr 28 01:19:01.157381 containerd[1586]: time="2026-04-28T01:19:00.046058548Z" level=error msg="Failed to handle backOff event container_id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" id:\"617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b\" pid:4412 exit_status:1 exited_at:{seconds:1777338469 nanos:680412686} for 617ed23e97204012a9bc64949fe03d042cd8d7fcecbff56f040baa2bf31d411b" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 28 01:19:04.133366 kubelet[3069]: E0428 01:19:03.976361 3069 kuberuntime_sandbox.go:294] "Failed to list pod sandboxes" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:19:15.379675 kubelet[3069]: E0428 01:19:00.225403 3069 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:19:16.285789 kubelet[3069]: E0428 01:19:15.372388 3069 kubelet.go:3102] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:19:21.877647 kubelet[3069]: E0428 01:19:17.174203 3069 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
Apr 28 01:19:29.106219 kubelet[3069]: E0428 01:19:15.292470 3069 generic.go:256] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:19:38.097536 kubelet[3069]: E0428 01:19:10.342639 3069 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2\": dial tcp 10.0.0.21:6443: i/o timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 28 01:19:46.013384 kubelet[3069]: E0428 01:18:43.553935 3069 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=995\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 01:19:52.113118 kubelet[3069]: E0428 01:19:27.889276 3069 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 11m48.117499682s ago; threshold is 3m0s]"
Apr 28 01:20:09.573344 kubelet[3069]: E0428 01:19:52.654169 3069 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 28 01:20:13.565170 containerd[1586]: time="2026-04-28T01:20:12.341297427Z" level=info msg="TaskExit event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808}"
Apr 28 01:20:16.082138 kubelet[3069]: E0428 01:20:08.538134 3069 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 28 01:20:21.389965 containerd[1586]: time="2026-04-28T01:20:21.382247128Z" level=error msg="ttrpc: received message on inactive stream" stream=77
Apr 28 01:20:23.322397 containerd[1586]: time="2026-04-28T01:20:23.319233552Z" level=error msg="ttrpc: received message on inactive stream" stream=79
Apr 28 01:20:26.988928 containerd[1586]: time="2026-04-28T01:20:23.327240828Z" level=error msg="Failed to handle backOff event container_id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" id:\"bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9\" pid:4418 exit_status:1 exited_at:{seconds:1777338473 nanos:575377808} for bd2aca8167942e6396b73e1309bbe8e54ab7edd8f06f601863e80b87e6ee5cc9" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"