Apr 21 04:03:29.109737 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 20 22:35:05 -00 2026
Apr 21 04:03:29.109773 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7
Apr 21 04:03:29.109788 kernel: BIOS-provided physical RAM map:
Apr 21 04:03:29.109796 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 21 04:03:29.109804 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 21 04:03:29.109812 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 21 04:03:29.109821 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 21 04:03:29.109829 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 21 04:03:29.109851 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 04:03:29.109859 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 21 04:03:29.109867 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 04:03:29.109893 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 21 04:03:29.109901 kernel: NX (Execute Disable) protection: active
Apr 21 04:03:29.109910 kernel: APIC: Static calls initialized
Apr 21 04:03:29.109920 kernel: SMBIOS 2.8 present.
Apr 21 04:03:29.109929 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 21 04:03:29.109952 kernel: DMI: Memory slots populated: 1/1
Apr 21 04:03:29.109961 kernel: Hypervisor detected: KVM
Apr 21 04:03:29.109970 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 21 04:03:29.109979 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 04:03:29.109987 kernel: kvm-clock: using sched offset of 10697667956 cycles
Apr 21 04:03:29.109997 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 04:03:29.110006 kernel: tsc: Detected 2793.438 MHz processor
Apr 21 04:03:29.110015 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 04:03:29.110025 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 04:03:29.110034 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 21 04:03:29.110045 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 21 04:03:29.110054 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 04:03:29.110063 kernel: Using GB pages for direct mapping
Apr 21 04:03:29.110071 kernel: ACPI: Early table checksum verification disabled
Apr 21 04:03:29.110081 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 21 04:03:29.110090 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 04:03:29.110099 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 04:03:29.110108 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 04:03:29.110116 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 21 04:03:29.110133 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 04:03:29.110142 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 04:03:29.110152 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 04:03:29.110162 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 04:03:29.110171 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 21 04:03:29.110186 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 21 04:03:29.110198 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 21 04:03:29.110207 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 21 04:03:29.110218 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 21 04:03:29.110227 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 21 04:03:29.110237 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 21 04:03:29.110248 kernel: No NUMA configuration found
Apr 21 04:03:29.110257 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 21 04:03:29.110267 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 21 04:03:29.110279 kernel: Zone ranges:
Apr 21 04:03:29.110289 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 04:03:29.110298 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 21 04:03:29.110308 kernel: Normal empty
Apr 21 04:03:29.110318 kernel: Device empty
Apr 21 04:03:29.110327 kernel: Movable zone start for each node
Apr 21 04:03:29.110337 kernel: Early memory node ranges
Apr 21 04:03:29.110346 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 21 04:03:29.110356 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 21 04:03:29.110380 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 21 04:03:29.110390 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 04:03:29.110400 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 21 04:03:29.110410 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 21 04:03:29.110430 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 04:03:29.110440 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 04:03:29.110450 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 04:03:29.110459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 04:03:29.110470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 04:03:29.110493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 04:03:29.110503 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 04:03:29.110513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 04:03:29.110523 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 04:03:29.110533 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 04:03:29.110542 kernel: TSC deadline timer available
Apr 21 04:03:29.110552 kernel: CPU topo: Max. logical packages: 1
Apr 21 04:03:29.110561 kernel: CPU topo: Max. logical dies: 1
Apr 21 04:03:29.110570 kernel: CPU topo: Max. dies per package: 1
Apr 21 04:03:29.110580 kernel: CPU topo: Max. threads per core: 1
Apr 21 04:03:29.110597 kernel: CPU topo: Num. cores per package: 4
Apr 21 04:03:29.110606 kernel: CPU topo: Num. threads per package: 4
Apr 21 04:03:29.110616 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 21 04:03:29.110626 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 04:03:29.110636 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 04:03:29.110646 kernel: kvm-guest: setup PV sched yield
Apr 21 04:03:29.110656 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 21 04:03:29.110665 kernel: Booting paravirtualized kernel on KVM
Apr 21 04:03:29.110675 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 04:03:29.110688 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 21 04:03:29.110725 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 21 04:03:29.110735 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 21 04:03:29.110745 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 21 04:03:29.110754 kernel: kvm-guest: PV spinlocks enabled
Apr 21 04:03:29.110765 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 04:03:29.110777 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7
Apr 21 04:03:29.110787 kernel: random: crng init done
Apr 21 04:03:29.110799 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 04:03:29.110809 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 04:03:29.110819 kernel: Fallback order for Node 0: 0
Apr 21 04:03:29.110829 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 21 04:03:29.110838 kernel: Policy zone: DMA32
Apr 21 04:03:29.110848 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 04:03:29.110858 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 21 04:03:29.110868 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 21 04:03:29.116391 kernel: ftrace: allocated 157 pages with 5 groups
Apr 21 04:03:29.117325 kernel: Dynamic Preempt: voluntary
Apr 21 04:03:29.117345 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 04:03:29.117364 kernel: rcu: RCU event tracing is enabled.
Apr 21 04:03:29.117377 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 21 04:03:29.117386 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 04:03:29.117396 kernel: Rude variant of Tasks RCU enabled.
Apr 21 04:03:29.117426 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 04:03:29.117440 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 04:03:29.117451 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 21 04:03:29.117468 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 04:03:29.117479 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 04:03:29.117490 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 04:03:29.117500 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 21 04:03:29.117509 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 04:03:29.117520 kernel: Console: colour VGA+ 80x25
Apr 21 04:03:29.117549 kernel: printk: legacy console [ttyS0] enabled
Apr 21 04:03:29.117565 kernel: ACPI: Core revision 20240827
Apr 21 04:03:29.117576 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 04:03:29.117587 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 04:03:29.117601 kernel: x2apic enabled
Apr 21 04:03:29.117611 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 04:03:29.117630 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 04:03:29.120024 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 04:03:29.120389 kernel: kvm-guest: setup PV IPIs
Apr 21 04:03:29.120403 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 04:03:29.120416 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 04:03:29.120481 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 21 04:03:29.120492 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 04:03:29.120501 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 21 04:03:29.120516 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 21 04:03:29.120527 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 04:03:29.120537 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 04:03:29.120546 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 04:03:29.120557 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 04:03:29.120612 kernel: RETBleed: Vulnerable
Apr 21 04:03:29.120621 kernel: Speculative Store Bypass: Vulnerable
Apr 21 04:03:29.120630 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 04:03:29.120638 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 04:03:29.120654 kernel: active return thunk: its_return_thunk
Apr 21 04:03:29.120662 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 04:03:29.120712 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 04:03:29.120723 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 04:03:29.120749 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 04:03:29.120763 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 04:03:29.120772 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 04:03:29.120782 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 04:03:29.120792 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 04:03:29.120802 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 04:03:29.120821 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 04:03:29.120851 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 04:03:29.120869 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 04:03:29.122024 kernel: Freeing SMP alternatives memory: 32K
Apr 21 04:03:29.122063 kernel: pid_max: default: 32768 minimum: 301
Apr 21 04:03:29.122074 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 21 04:03:29.122085 kernel: landlock: Up and running.
Apr 21 04:03:29.122096 kernel: SELinux: Initializing.
Apr 21 04:03:29.122106 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 04:03:29.122117 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 04:03:29.122128 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 21 04:03:29.122138 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 21 04:03:29.122148 kernel: signal: max sigframe size: 3632
Apr 21 04:03:29.122163 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 04:03:29.122173 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 04:03:29.122182 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 21 04:03:29.122191 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 04:03:29.122199 kernel: smp: Bringing up secondary CPUs ...
Apr 21 04:03:29.122208 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 04:03:29.122216 kernel: .... node #0, CPUs: #1 #2 #3
Apr 21 04:03:29.122224 kernel: smp: Brought up 1 node, 4 CPUs
Apr 21 04:03:29.122233 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 21 04:03:29.122336 kernel: Memory: 2419752K/2571752K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46228K init, 2520K bss, 146108K reserved, 0K cma-reserved)
Apr 21 04:03:29.122347 kernel: devtmpfs: initialized
Apr 21 04:03:29.122357 kernel: x86/mm: Memory block size: 128MB
Apr 21 04:03:29.122367 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 04:03:29.122377 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 21 04:03:29.122386 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 04:03:29.122395 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 04:03:29.122405 kernel: audit: initializing netlink subsys (disabled)
Apr 21 04:03:29.122414 kernel: audit: type=2000 audit(1776744199.013:1): state=initialized audit_enabled=0 res=1
Apr 21 04:03:29.122427 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 04:03:29.122437 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 04:03:29.122445 kernel: cpuidle: using governor menu
Apr 21 04:03:29.122454 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 04:03:29.122462 kernel: dca service started, version 1.12.1
Apr 21 04:03:29.122471 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 21 04:03:29.122480 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 04:03:29.122488 kernel: PCI: Using configuration type 1 for base access
Apr 21 04:03:29.122497 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 04:03:29.122508 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 04:03:29.122517 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 04:03:29.122525 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 04:03:29.122534 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 04:03:29.122543 kernel: ACPI: Added _OSI(Module Device)
Apr 21 04:03:29.122553 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 04:03:29.122562 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 04:03:29.122571 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 04:03:29.122579 kernel: ACPI: Interpreter enabled
Apr 21 04:03:29.122592 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 04:03:29.122601 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 04:03:29.122625 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 04:03:29.122634 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 04:03:29.122645 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 04:03:29.122654 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 04:03:29.123371 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 04:03:29.123479 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 04:03:29.123572 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 04:03:29.123585 kernel: PCI host bridge to bus 0000:00
Apr 21 04:03:29.126463 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 04:03:29.126580 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 04:03:29.126669 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 04:03:29.201993 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 21 04:03:29.204512 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 04:03:29.204644 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 21 04:03:29.204770 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 04:03:29.212014 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 21 04:03:29.212753 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 21 04:03:29.212905 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 21 04:03:29.213014 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 21 04:03:29.213120 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 21 04:03:29.213207 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 04:03:29.213337 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 21 04:03:29.213429 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 21 04:03:29.213522 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 21 04:03:29.213608 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 21 04:03:29.215912 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 21 04:03:29.216370 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 21 04:03:29.216468 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 21 04:03:29.218870 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 21 04:03:29.233739 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 21 04:03:29.237315 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 21 04:03:29.237612 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 21 04:03:29.237789 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 21 04:03:29.237901 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 21 04:03:29.238036 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 21 04:03:29.238131 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 04:03:29.238259 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 21 04:03:29.238362 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 21 04:03:29.238455 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 21 04:03:29.238601 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 21 04:03:29.238729 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 21 04:03:29.238747 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 04:03:29.238758 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 04:03:29.238772 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 04:03:29.238783 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 04:03:29.238793 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 04:03:29.238814 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 04:03:29.238834 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 04:03:29.238844 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 04:03:29.238855 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 04:03:29.238868 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 04:03:29.239103 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 04:03:29.239115 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 04:03:29.239124 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 04:03:29.239133 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 04:03:29.239141 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 04:03:29.239157 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 04:03:29.239167 kernel: iommu: Default domain type: Translated
Apr 21 04:03:29.239175 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 04:03:29.239186 kernel: PCI: Using ACPI for IRQ routing
Apr 21 04:03:29.239195 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 04:03:29.239204 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 21 04:03:29.239214 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 21 04:03:29.239329 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 04:03:29.239410 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 04:03:29.239499 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 04:03:29.239510 kernel: vgaarb: loaded
Apr 21 04:03:29.239520 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 04:03:29.239529 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 04:03:29.239538 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 04:03:29.239548 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 04:03:29.239557 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 04:03:29.239566 kernel: pnp: PnP ACPI init
Apr 21 04:03:29.239798 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 04:03:29.239826 kernel: pnp: PnP ACPI: found 6 devices
Apr 21 04:03:29.239836 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 04:03:29.239846 kernel: NET: Registered PF_INET protocol family
Apr 21 04:03:29.239857 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 04:03:29.239867 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 04:03:29.239895 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 04:03:29.239905 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 04:03:29.239914 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 04:03:29.239929 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 04:03:29.239937 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 04:03:29.239947 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 04:03:29.239956 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 04:03:29.239966 kernel: NET: Registered PF_XDP protocol family
Apr 21 04:03:29.240062 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 04:03:29.240144 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 04:03:29.240223 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 04:03:29.240302 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 21 04:03:29.240386 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 04:03:29.240468 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 21 04:03:29.240480 kernel: PCI: CLS 0 bytes, default 64
Apr 21 04:03:29.240491 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 04:03:29.240501 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 04:03:29.240512 kernel: Initialise system trusted keyrings
Apr 21 04:03:29.240523 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 04:03:29.240533 kernel: Key type asymmetric registered
Apr 21 04:03:29.240547 kernel: Asymmetric key parser 'x509' registered
Apr 21 04:03:29.240557 kernel: hrtimer: interrupt took 4641027 ns
Apr 21 04:03:29.240568 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 21 04:03:29.240579 kernel: io scheduler mq-deadline registered
Apr 21 04:03:29.240682 kernel: io scheduler kyber registered
Apr 21 04:03:29.240733 kernel: io scheduler bfq registered
Apr 21 04:03:29.240743 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 04:03:29.240756 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 04:03:29.240767 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 04:03:29.240873 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 21 04:03:29.241972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 04:03:29.241984 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 04:03:29.241993 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 04:03:29.242003 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 04:03:29.242014 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 04:03:29.242178 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 21 04:03:29.242193 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 04:03:29.242280 kernel: rtc_cmos 00:04: registered as rtc0
Apr 21 04:03:29.242357 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T04:03:27 UTC (1776744207)
Apr 21 04:03:29.242431 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 21 04:03:29.242442 kernel: intel_pstate: CPU model not supported
Apr 21 04:03:29.242452 kernel: NET: Registered PF_INET6 protocol family
Apr 21 04:03:29.242461 kernel: Segment Routing with IPv6
Apr 21 04:03:29.242471 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 04:03:29.242479 kernel: NET: Registered PF_PACKET protocol family
Apr 21 04:03:29.242488 kernel: Key type dns_resolver registered
Apr 21 04:03:29.242503 kernel: IPI shorthand broadcast: enabled
Apr 21 04:03:29.242512 kernel: sched_clock: Marking stable (8126036429, 371732878)->(8932234253, -434464946)
Apr 21 04:03:29.242521 kernel: registered taskstats version 1
Apr 21 04:03:29.242530 kernel: Loading compiled-in X.509 certificates
Apr 21 04:03:29.242540 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: bc6d78cd9d700d9d34e2c2c5bd3cbf2a73898336'
Apr 21 04:03:29.242549 kernel: Demotion targets for Node 0: null
Apr 21 04:03:29.242558 kernel: Key type .fscrypt registered
Apr 21 04:03:29.242566 kernel: Key type fscrypt-provisioning registered
Apr 21 04:03:29.242575 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 04:03:29.242587 kernel: ima: Allocated hash algorithm: sha1
Apr 21 04:03:29.242595 kernel: ima: No architecture policies found
Apr 21 04:03:29.242604 kernel: clk: Disabling unused clocks
Apr 21 04:03:29.242612 kernel: Warning: unable to open an initial console.
Apr 21 04:03:29.242621 kernel: Freeing unused kernel image (initmem) memory: 46228K
Apr 21 04:03:29.242631 kernel: Write protecting the kernel read-only data: 40960k
Apr 21 04:03:29.244570 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 21 04:03:29.244722 kernel: Run /init as init process
Apr 21 04:03:29.244736 kernel: with arguments:
Apr 21 04:03:29.244842 kernel: /init
Apr 21 04:03:29.244851 kernel: with environment:
Apr 21 04:03:29.244861 kernel: HOME=/
Apr 21 04:03:29.244870 kernel: TERM=linux
Apr 21 04:03:29.248008 systemd[1]: Successfully made /usr/ read-only.
Apr 21 04:03:29.248048 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 21 04:03:29.248125 systemd[1]: Detected virtualization kvm.
Apr 21 04:03:29.248156 systemd[1]: Detected architecture x86-64.
Apr 21 04:03:29.248172 systemd[1]: Running in initrd.
Apr 21 04:03:29.248184 systemd[1]: No hostname configured, using default hostname.
Apr 21 04:03:29.248194 systemd[1]: Hostname set to .
Apr 21 04:03:29.248206 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 04:03:29.248216 systemd[1]: Queued start job for default target initrd.target.
Apr 21 04:03:29.248226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 04:03:29.248240 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 04:03:29.248259 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 04:03:29.248271 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 04:03:29.248281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 04:03:29.248293 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 04:03:29.248304 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 04:03:29.248318 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 04:03:29.248328 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 04:03:29.248340 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 04:03:29.248350 systemd[1]: Reached target paths.target - Path Units.
Apr 21 04:03:29.248361 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 04:03:29.248372 systemd[1]: Reached target swap.target - Swaps.
Apr 21 04:03:29.248383 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 04:03:29.248392 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 04:03:29.248402 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 04:03:29.248415 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 04:03:29.248425 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 21 04:03:29.248435 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 04:03:29.248445 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 04:03:29.248455 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 04:03:29.248552 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 04:03:29.248562 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 04:03:29.248578 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 04:03:29.248587 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 04:03:29.248597 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 21 04:03:29.248608 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 04:03:29.248620 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 04:03:29.248631 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 04:03:29.248649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 04:03:29.248661 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 04:03:29.248673 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 04:03:29.248684 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 04:03:29.248941 systemd-journald[204]: Collecting audit messages is disabled.
Apr 21 04:03:29.249064 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 04:03:29.249077 systemd-journald[204]: Journal started
Apr 21 04:03:29.249105 systemd-journald[204]: Runtime Journal (/run/log/journal/69a7aa832da842be9c178015ece181aa) is 6M, max 48.2M, 42.2M free.
Apr 21 04:03:29.199664 systemd-modules-load[205]: Inserted module 'overlay'
Apr 21 04:03:29.267910 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 04:03:29.299280 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 04:03:29.526594 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 04:03:29.526642 kernel: Bridge firewalling registered
Apr 21 04:03:29.357973 systemd-modules-load[205]: Inserted module 'br_netfilter'
Apr 21 04:03:29.529309 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 04:03:29.535229 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 04:03:29.537827 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 04:03:29.549275 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 04:03:29.552961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 04:03:29.560770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 04:03:29.582412 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 21 04:03:29.592207 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 04:03:29.592789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 04:03:29.593072 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 04:03:29.604925 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 04:03:29.651393 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 04:03:29.661266 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 04:03:29.718728 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7
Apr 21 04:03:29.728984 systemd-resolved[235]: Positive Trust Anchors:
Apr 21 04:03:29.728999 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 04:03:29.729034 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 04:03:29.733004 systemd-resolved[235]: Defaulting to hostname 'linux'.
Apr 21 04:03:29.736303 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 04:03:29.740350 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 04:03:30.596921 kernel: SCSI subsystem initialized
Apr 21 04:03:30.644334 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 04:03:30.763523 kernel: iscsi: registered transport (tcp)
Apr 21 04:03:30.877084 kernel: iscsi: registered transport (qla4xxx)
Apr 21 04:03:30.878326 kernel: QLogic iSCSI HBA Driver
Apr 21 04:03:31.219011 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 04:03:31.368839 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 04:03:31.398039 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 04:03:32.489418 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 04:03:32.574310 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 04:03:32.888278 kernel: raid6: avx512x4 gen() 22313 MB/s
Apr 21 04:03:32.905074 kernel: raid6: avx512x2 gen() 33001 MB/s
Apr 21 04:03:32.923855 kernel: raid6: avx512x1 gen() 24203 MB/s
Apr 21 04:03:32.945809 kernel: raid6: avx2x4 gen() 18760 MB/s
Apr 21 04:03:32.972017 kernel: raid6: avx2x2 gen() 8641 MB/s
Apr 21 04:03:32.990656 kernel: raid6: avx2x1 gen() 10650 MB/s
Apr 21 04:03:32.991888 kernel: raid6: using algorithm avx512x2 gen() 33001 MB/s
Apr 21 04:03:33.009218 kernel: raid6: .... xor() 13872 MB/s, rmw enabled
Apr 21 04:03:33.010207 kernel: raid6: using avx512x2 recovery algorithm
Apr 21 04:03:33.116572 kernel: xor: automatically using best checksumming function   avx
Apr 21 04:03:33.611874 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 04:03:33.678075 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 04:03:33.696385 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 04:03:33.873143 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Apr 21 04:03:34.039325 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 04:03:34.069895 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 04:03:34.261890 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation
Apr 21 04:03:34.499840 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 04:03:34.510558 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 04:03:35.068240 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 04:03:35.085961 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 04:03:35.514171 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 21 04:03:35.522217 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 21 04:03:35.528575 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 04:03:35.541179 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 04:03:35.541965 kernel: GPT:9289727 != 19775487
Apr 21 04:03:35.541992 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 04:03:35.542740 kernel: GPT:9289727 != 19775487
Apr 21 04:03:35.543862 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 04:03:35.545435 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 04:03:35.546619 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 04:03:35.547421 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 04:03:35.563785 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 04:03:35.568217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 04:03:35.573031 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 21 04:03:35.595751 kernel: libata version 3.00 loaded.
Apr 21 04:03:35.623840 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 21 04:03:35.637885 kernel: AES CTR mode by8 optimization enabled
Apr 21 04:03:35.669818 kernel: ahci 0000:00:1f.2: version 3.0
Apr 21 04:03:35.679914 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 21 04:03:35.680006 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 21 04:03:35.687394 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 21 04:03:35.714248 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 21 04:03:35.778754 kernel: scsi host0: ahci
Apr 21 04:03:35.779412 kernel: scsi host1: ahci
Apr 21 04:03:35.789897 kernel: scsi host2: ahci
Apr 21 04:03:35.822100 kernel: scsi host3: ahci
Apr 21 04:03:35.829283 kernel: scsi host4: ahci
Apr 21 04:03:35.844789 kernel: scsi host5: ahci
Apr 21 04:03:35.860551 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Apr 21 04:03:35.866505 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Apr 21 04:03:35.868120 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Apr 21 04:03:35.868140 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Apr 21 04:03:35.868265 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Apr 21 04:03:35.868279 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Apr 21 04:03:35.874944 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 21 04:03:35.993878 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 21 04:03:36.018951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 04:03:36.166492 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 21 04:03:36.167567 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 21 04:03:36.172332 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 21 04:03:36.178773 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 21 04:03:36.179193 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 21 04:03:36.179220 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 21 04:03:36.179177 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 21 04:03:36.209527 kernel: ata3.00: LPM support broken, forcing max_power
Apr 21 04:03:36.210540 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 21 04:03:36.210777 kernel: ata3.00: applying bridge limits
Apr 21 04:03:36.210796 kernel: ata3.00: LPM support broken, forcing max_power
Apr 21 04:03:36.210808 kernel: ata3.00: configured for UDMA/100
Apr 21 04:03:36.210820 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 21 04:03:36.266247 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 21 04:03:36.285836 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 21 04:03:36.317985 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 04:03:36.484986 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 21 04:03:36.491718 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 21 04:03:36.491753 disk-uuid[646]: Primary Header is updated.
Apr 21 04:03:36.491753 disk-uuid[646]: Secondary Entries is updated.
Apr 21 04:03:36.491753 disk-uuid[646]: Secondary Header is updated.
Apr 21 04:03:36.510871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 04:03:36.513761 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 21 04:03:37.416497 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 04:03:37.438547 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 04:03:37.458098 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 04:03:37.463035 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 04:03:37.476168 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 04:03:37.626099 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 04:03:37.643359 disk-uuid[647]: The operation has completed successfully.
Apr 21 04:03:37.726349 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 04:03:37.861265 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 04:03:37.861440 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 04:03:38.131620 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 04:03:38.287310 sh[675]: Success
Apr 21 04:03:38.379096 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 04:03:38.383437 kernel: device-mapper: uevent: version 1.0.3
Apr 21 04:03:38.384336 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 21 04:03:38.468396 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 21 04:03:38.789080 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 04:03:38.820880 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 04:03:38.851262 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 04:03:38.897974 kernel: BTRFS: device fsid f0ffb5f7-32a8-4c02-8f56-14d7d8f0dab5 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (688)
Apr 21 04:03:38.909384 kernel: BTRFS info (device dm-0): first mount of filesystem f0ffb5f7-32a8-4c02-8f56-14d7d8f0dab5
Apr 21 04:03:38.910516 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 04:03:38.944185 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 21 04:03:38.948233 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 21 04:03:38.961206 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 04:03:39.000431 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 21 04:03:39.014322 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 04:03:39.096405 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 04:03:39.114414 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 04:03:39.316049 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (723)
Apr 21 04:03:39.316565 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc
Apr 21 04:03:39.322288 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 04:03:39.409993 kernel: BTRFS info (device vda6): turning on async discard
Apr 21 04:03:39.413190 kernel: BTRFS info (device vda6): enabling free space tree
Apr 21 04:03:39.483292 kernel: BTRFS info (device vda6): last unmount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc
Apr 21 04:03:39.510056 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 04:03:39.537052 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 04:03:40.288804 ignition[776]: Ignition 2.22.0
Apr 21 04:03:40.288866 ignition[776]: Stage: fetch-offline
Apr 21 04:03:40.289464 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Apr 21 04:03:40.289483 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 04:03:40.290502 ignition[776]: parsed url from cmdline: ""
Apr 21 04:03:40.290508 ignition[776]: no config URL provided
Apr 21 04:03:40.290516 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 04:03:40.290528 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Apr 21 04:03:40.294271 ignition[776]: op(1): [started] loading QEMU firmware config module
Apr 21 04:03:40.294280 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 21 04:03:40.357558 ignition[776]: op(1): [finished] loading QEMU firmware config module
Apr 21 04:03:40.407354 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 04:03:40.485944 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 04:03:40.619011 systemd-networkd[866]: lo: Link UP
Apr 21 04:03:40.619037 systemd-networkd[866]: lo: Gained carrier
Apr 21 04:03:40.619918 ignition[776]: parsing config with SHA512: 7b86c52d88d13de112458ef5612a18b4b315678c895a69a455770649bc6c5d88aa819cd24fbef905b5dc0acf625888063ac8ef864e09d03f85cd069a8a8a6343
Apr 21 04:03:40.625873 systemd-networkd[866]: Enumeration completed
Apr 21 04:03:40.627434 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 04:03:40.628180 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 04:03:40.628190 systemd-networkd[866]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 04:03:40.637470 ignition[776]: fetch-offline: fetch-offline passed
Apr 21 04:03:40.632780 systemd[1]: Reached target network.target - Network.
Apr 21 04:03:40.637573 ignition[776]: Ignition finished successfully
Apr 21 04:03:40.633612 systemd-networkd[866]: eth0: Link UP
Apr 21 04:03:40.634958 systemd-networkd[866]: eth0: Gained carrier
Apr 21 04:03:40.634999 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 04:03:40.636786 unknown[776]: fetched base config from "system"
Apr 21 04:03:40.636862 unknown[776]: fetched user config from "qemu"
Apr 21 04:03:40.655938 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 04:03:40.676661 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 21 04:03:40.680683 systemd-networkd[866]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 21 04:03:40.688886 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 04:03:40.982940 ignition[871]: Ignition 2.22.0
Apr 21 04:03:40.982996 ignition[871]: Stage: kargs
Apr 21 04:03:40.983562 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Apr 21 04:03:40.983582 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 04:03:40.986017 ignition[871]: kargs: kargs passed
Apr 21 04:03:41.002171 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 04:03:40.986198 ignition[871]: Ignition finished successfully
Apr 21 04:03:41.014798 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 04:03:41.216946 ignition[879]: Ignition 2.22.0
Apr 21 04:03:41.218407 ignition[879]: Stage: disks
Apr 21 04:03:41.246358 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Apr 21 04:03:41.267633 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 04:03:41.315265 ignition[879]: disks: disks passed
Apr 21 04:03:41.319422 ignition[879]: Ignition finished successfully
Apr 21 04:03:41.373667 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 04:03:41.396714 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 04:03:41.406717 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 04:03:41.426324 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 04:03:41.438576 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 04:03:41.448443 systemd[1]: Reached target basic.target - Basic System.
Apr 21 04:03:41.482208 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 04:03:41.806078 systemd-fsck[889]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Apr 21 04:03:41.909439 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 04:03:41.937745 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 04:03:42.605548 systemd-networkd[866]: eth0: Gained IPv6LL
Apr 21 04:03:42.664282 kernel: EXT4-fs (vda9): mounted filesystem 146ef5ea-4935-456e-a7a6-cf0210fee567 r/w with ordered data mode. Quota mode: none.
Apr 21 04:03:42.686449 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 04:03:42.698102 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 04:03:42.723610 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 04:03:42.749194 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 04:03:42.752926 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 04:03:42.765768 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 04:03:42.766210 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 04:03:42.812945 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (897)
Apr 21 04:03:42.816061 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 04:03:42.828121 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc
Apr 21 04:03:42.828228 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 04:03:42.844920 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 04:03:42.866385 kernel: BTRFS info (device vda6): turning on async discard
Apr 21 04:03:42.866462 kernel: BTRFS info (device vda6): enabling free space tree
Apr 21 04:03:42.870738 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 04:03:43.486304 initrd-setup-root[921]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 04:03:43.563640 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory
Apr 21 04:03:43.610378 initrd-setup-root[935]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 04:03:43.651066 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 04:03:45.706741 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 04:03:45.725319 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 04:03:45.751671 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 04:03:45.820926 kernel: BTRFS info (device vda6): last unmount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc
Apr 21 04:03:45.848053 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 04:03:46.032579 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 04:03:46.302100 ignition[1010]: INFO : Ignition 2.22.0
Apr 21 04:03:46.302100 ignition[1010]: INFO : Stage: mount
Apr 21 04:03:46.316243 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 04:03:46.316243 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 04:03:46.324347 ignition[1010]: INFO : mount: mount passed
Apr 21 04:03:46.326210 ignition[1010]: INFO : Ignition finished successfully
Apr 21 04:03:46.373222 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 04:03:46.399139 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 04:03:46.566061 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 04:03:46.767013 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1025)
Apr 21 04:03:46.781817 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc
Apr 21 04:03:46.782824 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 04:03:46.886908 kernel: BTRFS info (device vda6): turning on async discard
Apr 21 04:03:46.894829 kernel: BTRFS info (device vda6): enabling free space tree
Apr 21 04:03:46.939847 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 04:03:47.445559 ignition[1042]: INFO : Ignition 2.22.0
Apr 21 04:03:47.445559 ignition[1042]: INFO : Stage: files
Apr 21 04:03:47.474193 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 04:03:47.474193 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 04:03:47.491553 ignition[1042]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 04:03:47.564250 ignition[1042]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 04:03:47.581565 ignition[1042]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 04:03:47.673325 ignition[1042]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 04:03:47.693348 ignition[1042]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 04:03:47.707593 ignition[1042]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 04:03:47.706583 unknown[1042]: wrote ssh authorized keys file for user: core
Apr 21 04:03:47.857454 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 04:03:47.876504 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 04:03:47.995899 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 04:03:48.343018 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 04:03:48.343018 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 04:03:48.358223 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 21 04:03:48.689229 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 04:03:49.470394 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 04:03:49.483229 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 04:03:49.497301 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 04:03:49.497301 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 04:03:49.497301 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 04:03:49.497301 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 04:03:49.529618 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 04:03:49.529618 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 04:03:49.529618 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 04:03:49.529618 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 04:03:49.529618 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 04:03:49.529618 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 04:03:49.529618 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 04:03:49.529618 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 04:03:49.529618 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 04:03:49.833505 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 21 04:03:53.780218 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 04:03:53.792573 ignition[1042]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 21 04:03:53.805179 ignition[1042]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 04:03:53.833544 ignition[1042]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 04:03:53.850210 ignition[1042]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 21 04:03:53.859326 ignition[1042]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 21 04:03:53.859326 ignition[1042]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 04:03:53.887834 ignition[1042]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 04:03:53.887834 ignition[1042]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 21 04:03:53.887834 ignition[1042]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 21 04:03:54.066512 ignition[1042]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 04:03:54.115615 ignition[1042]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 04:03:54.205064 ignition[1042]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 21 04:03:54.214454 ignition[1042]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 04:03:54.214454 ignition[1042]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 04:03:54.245440 ignition[1042]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 04:03:54.245440 ignition[1042]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 04:03:54.245440 ignition[1042]: INFO : files: files passed
Apr 21 04:03:54.245440 ignition[1042]: INFO : Ignition finished successfully
Apr 21 04:03:54.266572 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 04:03:54.289159 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 04:03:54.310850 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 04:03:54.398002 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 04:03:54.400324 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 04:03:54.416413 initrd-setup-root-after-ignition[1072]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 21 04:03:54.434724 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 04:03:54.434724 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 04:03:54.453635 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 04:03:54.453791 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 04:03:54.457984 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 04:03:54.474188 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 04:03:54.969361 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 04:03:54.969949 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 04:03:55.031052 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 04:03:55.055531 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 04:03:55.070611 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 04:03:55.118003 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 04:03:55.305972 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 04:03:55.359225 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 04:03:55.550653 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 04:03:55.563622 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 04:03:55.577201 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 04:03:55.578397 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 04:03:55.579056 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 04:03:55.607294 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 04:03:55.607886 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 04:03:55.676481 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 04:03:55.702857 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 04:03:55.727434 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 04:03:55.742637 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 21 04:03:55.752558 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 04:03:55.764378 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 04:03:55.771430 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 04:03:55.797594 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 04:03:55.805818 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 04:03:55.814556 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 04:03:55.817112 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 04:03:55.830153 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 04:03:55.838450 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 04:03:55.844630 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 04:03:55.846843 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 04:03:55.853015 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 04:03:55.853746 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 04:03:55.865794 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 04:03:55.870219 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 04:03:55.883085 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 04:03:55.893385 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 04:03:55.897206 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 04:03:55.899479 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 04:03:55.914187 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 04:03:55.919885 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 04:03:55.922777 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 04:03:55.934114 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 04:03:55.937372 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 04:03:55.947563 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 04:03:55.950514 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 04:03:55.964511 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 04:03:55.965248 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 04:03:55.995132 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 04:03:56.024183 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 04:03:56.083008 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 04:03:56.093646 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 04:03:56.123652 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 04:03:56.124249 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 04:03:56.383542 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 04:03:56.554978 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 04:03:56.704393 ignition[1098]: INFO : Ignition 2.22.0
Apr 21 04:03:56.704393 ignition[1098]: INFO : Stage: umount
Apr 21 04:03:56.769043 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 04:03:56.769043 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 04:03:56.790790 ignition[1098]: INFO : umount: umount passed
Apr 21 04:03:56.790790 ignition[1098]: INFO : Ignition finished successfully
Apr 21 04:03:56.773457 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 04:03:56.792638 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 04:03:56.793240 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 04:03:56.834048 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 04:03:56.840743 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 04:03:56.862577 systemd[1]: Stopped target network.target - Network.
Apr 21 04:03:56.867974 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 04:03:56.872183 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 04:03:56.883943 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 04:03:56.886237 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 04:03:56.899611 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 04:03:56.901212 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 04:03:56.905372 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 04:03:56.906333 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 04:03:56.916860 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 04:03:56.919329 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 04:03:56.925026 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 04:03:56.931853 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 04:03:57.039623 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 04:03:57.040011 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 04:03:57.081165 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 21 04:03:57.082251 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 04:03:57.082664 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 04:03:57.130956 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 21 04:03:57.152194 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 21 04:03:57.157621 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 04:03:57.158557 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 04:03:57.176466 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 04:03:57.179141 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 04:03:57.179387 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 04:03:57.187498 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 04:03:57.187679 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 04:03:57.202401 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 04:03:57.204793 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 04:03:57.225672 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 04:03:57.231763 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 04:03:57.246178 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 04:03:57.257608 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 21 04:03:57.257766 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 21 04:03:57.298085 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 04:03:57.309617 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 04:03:57.320523 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 04:03:57.320780 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 04:03:57.329924 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 04:03:57.330257 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 04:03:57.338016 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 04:03:57.341332 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 04:03:57.357478 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 04:03:57.357939 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 04:03:57.369975 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 04:03:57.372441 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 04:03:57.391229 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 04:03:57.397928 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 21 04:03:57.398198 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 04:03:57.422023 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 04:03:57.429424 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 04:03:57.450526 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 04:03:57.451160 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 04:03:57.469035 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 04:03:57.469390 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 04:03:57.478507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 04:03:57.483497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 04:03:57.503789 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 21 04:03:57.503906 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Apr 21 04:03:57.503949 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 21 04:03:57.503995 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 21 04:03:57.507856 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 04:03:57.508539 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 04:03:57.517169 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 04:03:57.517283 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 04:03:57.534525 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 04:03:57.545866 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 04:03:57.614544 systemd[1]: Switching root.
Apr 21 04:03:57.821061 systemd-journald[204]: Journal stopped
Apr 21 04:04:06.670811 systemd-journald[204]: Received SIGTERM from PID 1 (systemd).
Apr 21 04:04:06.673635 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 04:04:06.675139 kernel: SELinux: policy capability open_perms=1
Apr 21 04:04:06.675372 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 04:04:06.675389 kernel: SELinux: policy capability always_check_network=0
Apr 21 04:04:06.675451 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 04:04:06.675467 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 04:04:06.675480 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 04:04:06.675503 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 04:04:06.675522 kernel: SELinux: policy capability userspace_initial_context=0
Apr 21 04:04:06.675657 kernel: audit: type=1403 audit(1776744238.674:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 04:04:06.675684 systemd[1]: Successfully loaded SELinux policy in 188.480ms.
Apr 21 04:04:06.675806 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 59.008ms.
Apr 21 04:04:06.675827 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 21 04:04:06.675842 systemd[1]: Detected virtualization kvm.
Apr 21 04:04:06.675929 systemd[1]: Detected architecture x86-64.
Apr 21 04:04:06.675944 systemd[1]: Detected first boot.
Apr 21 04:04:06.675964 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 04:04:06.675980 zram_generator::config[1146]: No configuration found.
Apr 21 04:04:06.676079 kernel: Guest personality initialized and is inactive
Apr 21 04:04:06.676094 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 21 04:04:06.676106 kernel: Initialized host personality
Apr 21 04:04:06.676118 kernel: NET: Registered PF_VSOCK protocol family
Apr 21 04:04:06.676131 systemd[1]: Populated /etc with preset unit settings.
Apr 21 04:04:06.676159 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 21 04:04:06.677035 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 04:04:06.679392 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 04:04:06.680500 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 04:04:06.680999 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 04:04:06.681022 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 04:04:06.681117 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 04:04:06.681133 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 04:04:06.681148 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 04:04:06.681164 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 04:04:06.681986 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 04:04:06.682767 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 04:04:06.683086 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 04:04:06.683105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 04:04:06.683120 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 04:04:06.683135 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 04:04:06.683151 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 04:04:06.683215 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 04:04:06.683230 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 04:04:06.683242 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 04:04:06.683254 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 04:04:06.683269 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 04:04:06.683283 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 04:04:06.683308 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 04:04:06.683321 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 04:04:06.683333 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 04:04:06.683430 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 04:04:06.683446 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 04:04:06.683460 systemd[1]: Reached target swap.target - Swaps.
Apr 21 04:04:06.683473 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 04:04:06.683488 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 04:04:06.683501 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 21 04:04:06.683515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 04:04:06.683531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 04:04:06.683545 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 04:04:06.683559 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 04:04:06.683576 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 04:04:06.683591 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 04:04:06.683605 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 04:04:06.683622 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 04:04:06.683641 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 04:04:06.683655 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 04:04:06.683670 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 04:04:06.683686 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 04:04:06.683759 systemd[1]: Reached target machines.target - Containers.
Apr 21 04:04:06.683774 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 04:04:06.683790 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 04:04:06.684682 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 04:04:06.685593 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 04:04:06.687294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 04:04:06.688517 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 04:04:06.689618 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 04:04:06.689804 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 04:04:06.689840 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 04:04:06.689857 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 04:04:06.689872 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 04:04:06.689887 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 04:04:06.689902 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 04:04:06.689917 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 04:04:06.689934 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 21 04:04:06.689951 kernel: ACPI: bus type drm_connector registered
Apr 21 04:04:06.689973 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 04:04:06.689986 kernel: fuse: init (API version 7.41)
Apr 21 04:04:06.690001 kernel: loop: module loaded
Apr 21 04:04:06.690015 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 04:04:06.690065 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 04:04:06.690080 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 04:04:06.690095 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 21 04:04:06.690109 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 04:04:06.690127 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 04:04:06.690156 systemd[1]: Stopped verity-setup.service.
Apr 21 04:04:06.690562 systemd-journald[1224]: Collecting audit messages is disabled.
Apr 21 04:04:06.690609 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 04:04:06.690626 systemd-journald[1224]: Journal started
Apr 21 04:04:06.690658 systemd-journald[1224]: Runtime Journal (/run/log/journal/69a7aa832da842be9c178015ece181aa) is 6M, max 48.2M, 42.2M free.
Apr 21 04:04:04.314103 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 04:04:04.440539 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 21 04:04:04.452954 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 04:04:04.462110 systemd[1]: systemd-journald.service: Consumed 1.930s CPU time.
Apr 21 04:04:06.718961 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 04:04:06.749157 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 04:04:06.771683 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 04:04:06.857686 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 04:04:06.877154 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 04:04:06.900842 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 04:04:06.919519 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 04:04:06.955658 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 04:04:06.994104 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 04:04:07.018886 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 04:04:07.055211 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 04:04:07.078333 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 04:04:07.084244 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 04:04:07.097678 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 04:04:07.102157 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 04:04:07.112958 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 04:04:07.118169 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 04:04:07.146620 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 04:04:07.151613 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 04:04:07.203133 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 04:04:07.219875 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 04:04:07.277678 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 04:04:07.286771 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 04:04:07.296345 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 04:04:07.299886 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 21 04:04:07.337669 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 04:04:07.349275 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 04:04:07.381837 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 04:04:07.399688 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 04:04:07.399998 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 04:04:07.418098 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 21 04:04:07.441352 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 04:04:07.453975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 04:04:07.463145 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 04:04:07.516290 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 04:04:07.576905 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 04:04:07.619337 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 04:04:07.626086 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 04:04:07.633221 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 04:04:07.663797 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 04:04:07.696212 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 04:04:07.699670 systemd-journald[1224]: Time spent on flushing to /var/log/journal/69a7aa832da842be9c178015ece181aa is 97.119ms for 991 entries.
Apr 21 04:04:07.699670 systemd-journald[1224]: System Journal (/var/log/journal/69a7aa832da842be9c178015ece181aa) is 8M, max 195.6M, 187.6M free.
Apr 21 04:04:07.906675 systemd-journald[1224]: Received client request to flush runtime journal.
Apr 21 04:04:07.800734 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 04:04:07.818899 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 04:04:07.862110 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 04:04:07.924684 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 04:04:07.933130 kernel: loop0: detected capacity change from 0 to 110984
Apr 21 04:04:07.976273 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 04:04:07.982976 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 04:04:08.001275 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 21 04:04:08.008259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 04:04:08.122643 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 04:04:08.170364 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Apr 21 04:04:08.170414 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Apr 21 04:04:08.310739 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 04:04:08.342841 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 04:04:08.353592 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 04:04:08.356226 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 21 04:04:08.376833 kernel: loop1: detected capacity change from 0 to 228704
Apr 21 04:04:08.485877 kernel: loop2: detected capacity change from 0 to 128560
Apr 21 04:04:08.560280 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 04:04:08.569031 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 04:04:08.612314 kernel: loop3: detected capacity change from 0 to 110984
Apr 21 04:04:08.715148 kernel: loop4: detected capacity change from 0 to 228704
Apr 21 04:04:08.835031 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Apr 21 04:04:08.835060 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Apr 21 04:04:08.861994 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 04:04:08.872469 kernel: loop5: detected capacity change from 0 to 128560
Apr 21 04:04:08.931945 (sd-merge)[1291]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 21 04:04:08.933482 (sd-merge)[1291]: Merged extensions into '/usr'.
Apr 21 04:04:08.963302 systemd[1]: Reload requested from client PID 1265 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 04:04:08.964496 systemd[1]: Reloading...
Apr 21 04:04:09.514018 zram_generator::config[1315]: No configuration found.
Apr 21 04:04:10.922012 ldconfig[1260]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 04:04:11.538590 systemd[1]: Reloading finished in 2571 ms.
Apr 21 04:04:11.718931 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 04:04:11.737810 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 04:04:11.763994 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 04:04:11.970316 systemd[1]: Starting ensure-sysext.service...
Apr 21 04:04:12.007745 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 04:04:12.075624 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 04:04:12.203127 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 21 04:04:12.203879 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 21 04:04:12.205653 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 04:04:12.206146 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 04:04:12.206592 systemd[1]: Reload requested from client PID 1358 ('systemctl') (unit ensure-sysext.service)...
Apr 21 04:04:12.206610 systemd[1]: Reloading...
Apr 21 04:04:12.216391 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 04:04:12.216869 systemd-tmpfiles[1359]: ACLs are not supported, ignoring.
Apr 21 04:04:12.216933 systemd-tmpfiles[1359]: ACLs are not supported, ignoring.
Apr 21 04:04:12.289932 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 04:04:12.289964 systemd-tmpfiles[1359]: Skipping /boot
Apr 21 04:04:12.426047 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 04:04:12.426885 systemd-tmpfiles[1359]: Skipping /boot
Apr 21 04:04:12.500159 systemd-udevd[1360]: Using default interface naming scheme 'v255'.
Apr 21 04:04:12.675924 zram_generator::config[1384]: No configuration found.
Apr 21 04:04:13.670252 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 04:04:13.686358 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 21 04:04:13.692676 kernel: ACPI: button: Power Button [PWRF]
Apr 21 04:04:13.728818 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 21 04:04:13.735541 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 04:04:13.738091 systemd[1]: Reloading finished in 1530 ms.
Apr 21 04:04:13.782513 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 21 04:04:13.794124 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 21 04:04:13.888522 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 04:04:14.009775 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 04:04:14.238886 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 04:04:14.252448 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 21 04:04:14.293355 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 04:04:14.299158 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 04:04:14.304116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 04:04:14.325255 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 04:04:14.345108 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 04:04:14.348088 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 04:04:14.377959 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 04:04:14.388116 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 21 04:04:14.423947 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 04:04:14.473197 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 04:04:14.497916 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 04:04:14.578548 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 04:04:14.584116 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 04:04:14.603086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 04:04:14.617006 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 04:04:14.626107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 04:04:14.638597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 04:04:14.650302 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 04:04:14.818171 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 04:04:14.831147 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 04:04:14.886869 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 04:04:14.893660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 04:04:14.947143 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 04:04:14.991276 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 04:04:15.092502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 04:04:15.163753 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 04:04:15.188322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 04:04:15.193415 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 21 04:04:15.259793 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 04:04:15.271172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 04:04:15.412945 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 04:04:15.432155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 04:04:15.436074 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 04:04:15.487795 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 04:04:15.494139 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 04:04:15.677983 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 04:04:15.680970 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 04:04:15.697213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 04:04:15.697463 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 04:04:15.764062 systemd[1]: Finished ensure-sysext.service.
Apr 21 04:04:15.950206 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 04:04:15.966133 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 04:04:15.986542 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 04:04:15.986947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 04:04:16.008367 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 04:04:16.029279 augenrules[1524]: No rules
Apr 21 04:04:16.051133 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 04:04:16.095746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 04:04:16.103843 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 04:04:16.111176 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 04:04:16.115941 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 21 04:04:16.276279 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 04:04:16.592080 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 04:04:17.640366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 04:04:18.208077 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 04:04:18.216570 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 04:04:18.259099 systemd-resolved[1484]: Positive Trust Anchors:
Apr 21 04:04:18.259440 systemd-resolved[1484]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 04:04:18.259535 systemd-resolved[1484]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 04:04:18.303043 systemd-resolved[1484]: Defaulting to hostname 'linux'.
Apr 21 04:04:18.308010 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 04:04:18.313912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 04:04:18.321665 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 04:04:18.379939 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 04:04:18.399085 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 04:04:18.407569 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 21 04:04:18.422674 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 04:04:18.447128 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 04:04:18.461207 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 04:04:18.468452 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 04:04:18.469654 systemd[1]: Reached target paths.target - Path Units.
Apr 21 04:04:18.483595 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 04:04:18.505281 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 04:04:18.544605 systemd-networkd[1483]: lo: Link UP
Apr 21 04:04:18.544676 systemd-networkd[1483]: lo: Gained carrier
Apr 21 04:04:18.547979 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 04:04:18.550142 systemd-networkd[1483]: Enumeration completed
Apr 21 04:04:18.576417 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 21 04:04:18.586948 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 04:04:18.586964 systemd-networkd[1483]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 04:04:18.593421 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 21 04:04:18.605831 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 21 04:04:18.619323 systemd-networkd[1483]: eth0: Link UP
Apr 21 04:04:18.620205 systemd-networkd[1483]: eth0: Gained carrier
Apr 21 04:04:18.621478 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 04:04:18.744569 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 04:04:18.757444 systemd-networkd[1483]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 21 04:04:18.762762 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 21 04:04:18.772273 systemd-timesyncd[1525]: Network configuration changed, trying to establish connection.
Apr 21 04:04:18.781886 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 04:04:18.792194 systemd-timesyncd[1525]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 21 04:04:18.792888 systemd-timesyncd[1525]: Initial clock synchronization to Tue 2026-04-21 04:04:18.951476 UTC.
Apr 21 04:04:18.793536 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 04:04:18.799977 systemd[1]: Reached target network.target - Network.
Apr 21 04:04:18.801754 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 04:04:18.807249 systemd[1]: Reached target basic.target - Basic System.
Apr 21 04:04:18.815071 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 04:04:18.815673 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 04:04:18.921883 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 04:04:18.938423 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 04:04:18.956237 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 04:04:19.008984 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 04:04:19.032346 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 04:04:19.038503 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 04:04:19.106518 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 21 04:04:19.193228 jq[1554]: false
Apr 21 04:04:19.208924 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 04:04:19.248523 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 04:04:19.261648 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 04:04:19.271449 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing passwd entry cache
Apr 21 04:04:19.270317 oslogin_cache_refresh[1556]: Refreshing passwd entry cache
Apr 21 04:04:19.276270 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 04:04:19.293447 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting users, quitting
Apr 21 04:04:19.293261 oslogin_cache_refresh[1556]: Failure getting users, quitting
Apr 21 04:04:19.313026 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 21 04:04:19.313026 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing group entry cache
Apr 21 04:04:19.294544 oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 21 04:04:19.298236 oslogin_cache_refresh[1556]: Refreshing group entry cache
Apr 21 04:04:19.325378 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 04:04:19.385546 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting groups, quitting
Apr 21 04:04:19.385546 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 21 04:04:19.326750 oslogin_cache_refresh[1556]: Failure getting groups, quitting
Apr 21 04:04:19.326933 oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 21 04:04:19.430263 extend-filesystems[1555]: Found /dev/vda6
Apr 21 04:04:19.436529 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 21 04:04:19.465082 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 04:04:19.488610 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 04:04:19.496993 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 04:04:19.508655 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 04:04:19.532910 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 04:04:19.561835 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 04:04:19.582317 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 04:04:19.592488 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 04:04:19.599669 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 21 04:04:19.605647 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 21 04:04:19.691466 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 04:04:19.697246 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 04:04:19.743825 extend-filesystems[1555]: Found /dev/vda9
Apr 21 04:04:19.760083 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 04:04:19.771090 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 04:04:19.778553 jq[1575]: true
Apr 21 04:04:19.800182 extend-filesystems[1555]: Checking size of /dev/vda9
Apr 21 04:04:19.907010 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 04:04:19.920684 systemd-networkd[1483]: eth0: Gained IPv6LL
Apr 21 04:04:20.017299 jq[1584]: true
Apr 21 04:04:20.031392 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 04:04:20.347586 update_engine[1573]: I20260421 04:04:20.328265 1573 main.cc:92] Flatcar Update Engine starting
Apr 21 04:04:20.428640 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 04:04:20.441060 extend-filesystems[1555]: Resized partition /dev/vda9
Apr 21 04:04:20.493325 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 04:04:20.519097 tar[1579]: linux-amd64/LICENSE
Apr 21 04:04:20.500974 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 04:04:20.535021 tar[1579]: linux-amd64/helm
Apr 21 04:04:20.572512 extend-filesystems[1605]: resize2fs 1.47.3 (8-Jul-2025)
Apr 21 04:04:20.559432 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 21 04:04:20.716506 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 21 04:04:20.611572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 04:04:20.713576 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 04:04:20.944510 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 04:04:21.009180 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 04:04:21.105488 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:45662.service - OpenSSH per-connection server daemon (10.0.0.1:45662).
Apr 21 04:04:21.127966 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 21 04:04:21.505867 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 04:04:21.506410 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 04:04:21.507453 systemd-logind[1564]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 21 04:04:21.507474 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 04:04:21.533327 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 21 04:04:21.535669 systemd-logind[1564]: New seat seat0.
Apr 21 04:04:21.660391 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 04:04:21.683217 extend-filesystems[1605]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 21 04:04:21.683217 extend-filesystems[1605]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 21 04:04:21.683217 extend-filesystems[1605]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 21 04:04:22.892922 bash[1646]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 04:04:21.921405 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 04:04:22.264381 dbus-daemon[1552]: [system] SELinux support is enabled
Apr 21 04:04:22.985561 update_engine[1573]: I20260421 04:04:22.711356 1573 update_check_scheduler.cc:74] Next update check in 7m14s
Apr 21 04:04:22.985993 extend-filesystems[1555]: Resized filesystem in /dev/vda9
Apr 21 04:04:21.941962 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 04:04:22.890493 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 04:04:22.924921 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 04:04:22.962005 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 04:04:23.049972 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 04:04:23.071519 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 21 04:04:23.082460 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 21 04:04:23.557501 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 04:04:23.632873 dbus-daemon[1552]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 21 04:04:23.704218 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 04:04:23.780043 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 04:04:23.788914 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 04:04:23.869014 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 04:04:23.886347 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 04:04:23.892425 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 21 04:04:23.894450 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 04:04:23.897784 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 04:04:23.904498 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 45662 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:04:23.905566 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 04:04:23.906037 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 04:04:23.930848 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 04:04:23.962206 containerd[1580]: time="2026-04-21T04:04:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 21 04:04:23.962113 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:04:24.006487 containerd[1580]: time="2026-04-21T04:04:24.005238100Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 21 04:04:24.381231 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 04:04:24.481221 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 04:04:24.556531 containerd[1580]: time="2026-04-21T04:04:24.550630627Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="179.029µs"
Apr 21 04:04:24.556531 containerd[1580]: time="2026-04-21T04:04:24.551667738Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 21 04:04:24.556531 containerd[1580]: time="2026-04-21T04:04:24.551892339Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 21 04:04:24.556531 containerd[1580]: time="2026-04-21T04:04:24.554353843Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 21 04:04:24.556531 containerd[1580]: time="2026-04-21T04:04:24.554440525Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 21 04:04:24.556531 containerd[1580]: time="2026-04-21T04:04:24.555370966Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 21 04:04:24.564622 containerd[1580]: time="2026-04-21T04:04:24.558039238Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 21 04:04:24.564622 containerd[1580]: time="2026-04-21T04:04:24.558257030Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 21 04:04:24.564622 containerd[1580]: time="2026-04-21T04:04:24.564924100Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 21 04:04:24.564622 containerd[1580]: time="2026-04-21T04:04:24.565232888Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 21 04:04:24.564622 containerd[1580]: time="2026-04-21T04:04:24.565279298Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 21 04:04:24.564622 containerd[1580]: time="2026-04-21T04:04:24.565299831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 21 04:04:24.570065 containerd[1580]: time="2026-04-21T04:04:24.568999028Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 21 04:04:24.579954 containerd[1580]: time="2026-04-21T04:04:24.579104833Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 21 04:04:24.586622 containerd[1580]: time="2026-04-21T04:04:24.584666562Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 21 04:04:24.597353 containerd[1580]: time="2026-04-21T04:04:24.585299275Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 21 04:04:24.597353 containerd[1580]: time="2026-04-21T04:04:24.595322580Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 21 04:04:24.666683 containerd[1580]: time="2026-04-21T04:04:24.655438731Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 21 04:04:24.691198 containerd[1580]: time="2026-04-21T04:04:24.683406066Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 04:04:24.808474 systemd-logind[1564]: New session 1 of user core.
Apr 21 04:04:24.816598 containerd[1580]: time="2026-04-21T04:04:24.816070443Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 21 04:04:24.825360 containerd[1580]: time="2026-04-21T04:04:24.824798041Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 21 04:04:24.829979 containerd[1580]: time="2026-04-21T04:04:24.828457935Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 21 04:04:24.829979 containerd[1580]: time="2026-04-21T04:04:24.829436578Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 21 04:04:24.829979 containerd[1580]: time="2026-04-21T04:04:24.829652845Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 21 04:04:24.829979 containerd[1580]: time="2026-04-21T04:04:24.829675543Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 21 04:04:24.837082 containerd[1580]: time="2026-04-21T04:04:24.836972231Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 21 04:04:24.848022 containerd[1580]: time="2026-04-21T04:04:24.845468021Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 21 04:04:24.870624 containerd[1580]: time="2026-04-21T04:04:24.864204518Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 21 04:04:24.868285 locksmithd[1665]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.872438778Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.872687103Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.872756403Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.873805598Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.873862357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.873893285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.874124305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.874157221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.874176189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.874284436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.874381340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.874428210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.874451912Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.874465401Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 21 04:04:25.000300 containerd[1580]: time="2026-04-21T04:04:24.874879744Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 21 04:04:25.004272 containerd[1580]: time="2026-04-21T04:04:24.874959369Z" level=info msg="Start snapshots syncer"
Apr 21 04:04:25.004272 containerd[1580]: time="2026-04-21T04:04:24.878656339Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 21 04:04:25.004272 containerd[1580]: time="2026-04-21T04:04:24.889668339Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController
\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 21 04:04:25.005751 containerd[1580]: time="2026-04-21T04:04:24.891165822Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 21 04:04:25.005751 containerd[1580]: time="2026-04-21T04:04:24.903522372Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 21 04:04:25.008774 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Apr 21 04:04:25.018762 containerd[1580]: time="2026-04-21T04:04:25.018071549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 21 04:04:25.020041 containerd[1580]: time="2026-04-21T04:04:25.020003830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 21 04:04:25.020208 containerd[1580]: time="2026-04-21T04:04:25.020193240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 21 04:04:25.020293 containerd[1580]: time="2026-04-21T04:04:25.020281239Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 21 04:04:25.025356 containerd[1580]: time="2026-04-21T04:04:25.024768578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 21 04:04:25.028365 containerd[1580]: time="2026-04-21T04:04:25.027324651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 21 04:04:25.031856 containerd[1580]: time="2026-04-21T04:04:25.030553636Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 21 04:04:25.031856 containerd[1580]: time="2026-04-21T04:04:25.031264088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 21 04:04:25.031856 containerd[1580]: time="2026-04-21T04:04:25.031304775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 21 04:04:25.031856 containerd[1580]: time="2026-04-21T04:04:25.031457445Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 21 04:04:25.048291 containerd[1580]: time="2026-04-21T04:04:25.046432990Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 21 04:04:25.052436 
containerd[1580]: time="2026-04-21T04:04:25.051032593Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 21 04:04:25.052436 containerd[1580]: time="2026-04-21T04:04:25.051278437Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 21 04:04:25.052436 containerd[1580]: time="2026-04-21T04:04:25.051310336Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 21 04:04:25.052436 containerd[1580]: time="2026-04-21T04:04:25.051330284Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 21 04:04:25.052436 containerd[1580]: time="2026-04-21T04:04:25.051368888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 21 04:04:25.052436 containerd[1580]: time="2026-04-21T04:04:25.051645247Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 21 04:04:25.052436 containerd[1580]: time="2026-04-21T04:04:25.051861793Z" level=info msg="runtime interface created" Apr 21 04:04:25.052436 containerd[1580]: time="2026-04-21T04:04:25.051870957Z" level=info msg="created NRI interface" Apr 21 04:04:25.052436 containerd[1580]: time="2026-04-21T04:04:25.051881321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 21 04:04:25.052436 containerd[1580]: time="2026-04-21T04:04:25.051929623Z" level=info msg="Connect containerd service" Apr 21 04:04:25.052436 containerd[1580]: time="2026-04-21T04:04:25.052172361Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 04:04:25.077099 containerd[1580]: time="2026-04-21T04:04:25.075160060Z" level=error msg="failed to load cni during init, please check 
CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 04:04:25.195519 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 04:04:25.355507 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 04:04:25.471882 systemd-logind[1564]: New session c1 of user core. Apr 21 04:04:26.515999 containerd[1580]: time="2026-04-21T04:04:26.511537743Z" level=info msg="Start subscribing containerd event" Apr 21 04:04:26.662203 containerd[1580]: time="2026-04-21T04:04:26.606331685Z" level=info msg="Start recovering state" Apr 21 04:04:26.662203 containerd[1580]: time="2026-04-21T04:04:26.618680308Z" level=info msg="Start event monitor" Apr 21 04:04:26.662203 containerd[1580]: time="2026-04-21T04:04:26.620379839Z" level=info msg="Start cni network conf syncer for default" Apr 21 04:04:26.662203 containerd[1580]: time="2026-04-21T04:04:26.620539990Z" level=info msg="Start streaming server" Apr 21 04:04:26.662203 containerd[1580]: time="2026-04-21T04:04:26.623356709Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 21 04:04:26.662203 containerd[1580]: time="2026-04-21T04:04:26.625185767Z" level=info msg="runtime interface starting up..." Apr 21 04:04:26.662203 containerd[1580]: time="2026-04-21T04:04:26.625579636Z" level=info msg="starting plugins..." Apr 21 04:04:26.662203 containerd[1580]: time="2026-04-21T04:04:26.628161368Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 21 04:04:26.662203 containerd[1580]: time="2026-04-21T04:04:26.660930410Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 04:04:26.703400 containerd[1580]: time="2026-04-21T04:04:26.699444628Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 21 04:04:26.725900 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 04:04:26.804785 containerd[1580]: time="2026-04-21T04:04:26.745476990Z" level=info msg="containerd successfully booted in 2.793151s" Apr 21 04:04:26.920028 tar[1579]: linux-amd64/README.md Apr 21 04:04:27.111244 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 04:04:27.567739 systemd[1676]: Queued start job for default target default.target. Apr 21 04:04:27.584421 systemd[1676]: Created slice app.slice - User Application Slice. Apr 21 04:04:27.584504 systemd[1676]: Reached target paths.target - Paths. Apr 21 04:04:27.585299 systemd[1676]: Reached target timers.target - Timers. Apr 21 04:04:27.608167 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 04:04:27.809327 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 04:04:27.813617 systemd[1676]: Reached target sockets.target - Sockets. Apr 21 04:04:27.815142 systemd[1676]: Reached target basic.target - Basic System. Apr 21 04:04:27.816781 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 04:04:27.823395 systemd[1676]: Reached target default.target - Main User Target. Apr 21 04:04:27.823772 systemd[1676]: Startup finished in 2.147s. Apr 21 04:04:27.856372 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 04:04:28.155427 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:59842.service - OpenSSH per-connection server daemon (10.0.0.1:59842). Apr 21 04:04:28.831180 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 59842 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:04:28.916348 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:04:28.981141 systemd-logind[1564]: New session 2 of user core. 
Apr 21 04:04:29.003113 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 04:04:29.235255 sshd[1705]: Connection closed by 10.0.0.1 port 59842 Apr 21 04:04:29.236015 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Apr 21 04:04:29.323124 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:59842.service: Deactivated successfully. Apr 21 04:04:29.353879 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 04:04:29.366105 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. Apr 21 04:04:29.419645 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:59858.service - OpenSSH per-connection server daemon (10.0.0.1:59858). Apr 21 04:04:29.430191 systemd-logind[1564]: Removed session 2. Apr 21 04:04:30.347917 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 59858 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:04:30.356032 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:04:30.602865 systemd-logind[1564]: New session 3 of user core. Apr 21 04:04:30.720649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:04:30.805013 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 04:04:30.809096 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:04:30.809486 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 04:04:30.811235 systemd[1]: Startup finished in 8.296s (kernel) + 30.808s (initrd) + 32.299s (userspace) = 1min 11.404s. Apr 21 04:04:31.189578 sshd[1724]: Connection closed by 10.0.0.1 port 59858 Apr 21 04:04:31.191234 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Apr 21 04:04:31.297462 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:59858.service: Deactivated successfully. 
Apr 21 04:04:31.355255 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 04:04:31.379791 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. Apr 21 04:04:31.405115 systemd-logind[1564]: Removed session 3. Apr 21 04:04:39.041500 kubelet[1722]: E0421 04:04:39.039800 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:04:39.094276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:04:39.094914 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:04:39.106013 systemd[1]: kubelet.service: Consumed 6.672s CPU time, 272.2M memory peak. Apr 21 04:04:41.427631 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:33010.service - OpenSSH per-connection server daemon (10.0.0.1:33010). Apr 21 04:04:42.458226 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 33010 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:04:42.549520 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:04:43.070297 systemd-logind[1564]: New session 4 of user core. Apr 21 04:04:43.105813 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 04:04:43.581391 sshd[1740]: Connection closed by 10.0.0.1 port 33010 Apr 21 04:04:43.673470 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Apr 21 04:04:43.753757 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:33010.service: Deactivated successfully. Apr 21 04:04:43.818354 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 04:04:43.869765 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. 
Apr 21 04:04:43.948479 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:33026.service - OpenSSH per-connection server daemon (10.0.0.1:33026). Apr 21 04:04:43.957631 systemd-logind[1564]: Removed session 4. Apr 21 04:04:45.073673 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 33026 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:04:45.121945 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:04:45.468594 systemd-logind[1564]: New session 5 of user core. Apr 21 04:04:45.638449 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 04:04:46.101539 sshd[1749]: Connection closed by 10.0.0.1 port 33026 Apr 21 04:04:46.108831 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Apr 21 04:04:46.298492 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:33026.service: Deactivated successfully. Apr 21 04:04:46.350993 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 04:04:46.413098 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. Apr 21 04:04:46.553272 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:36048.service - OpenSSH per-connection server daemon (10.0.0.1:36048). Apr 21 04:04:46.573227 systemd-logind[1564]: Removed session 5. Apr 21 04:04:47.577958 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 36048 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:04:47.650863 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:04:47.942864 systemd-logind[1564]: New session 6 of user core. Apr 21 04:04:48.024070 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 21 04:04:48.629461 sshd[1758]: Connection closed by 10.0.0.1 port 36048 Apr 21 04:04:48.637554 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Apr 21 04:04:48.790756 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:36048.service: Deactivated successfully. Apr 21 04:04:48.823962 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 04:04:48.870641 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. Apr 21 04:04:49.041010 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:36056.service - OpenSSH per-connection server daemon (10.0.0.1:36056). Apr 21 04:04:49.107617 systemd-logind[1564]: Removed session 6. Apr 21 04:04:49.127077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 04:04:49.232256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:04:49.876045 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 36056 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:04:49.953094 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:04:50.246526 systemd-logind[1564]: New session 7 of user core. Apr 21 04:04:50.410425 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 04:04:51.058828 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 04:04:51.079343 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 04:04:51.289648 sudo[1771]: pam_unix(sudo:session): session closed for user root Apr 21 04:04:51.350773 sshd[1770]: Connection closed by 10.0.0.1 port 36056 Apr 21 04:04:51.351782 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Apr 21 04:04:51.501816 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:36056.service: Deactivated successfully. Apr 21 04:04:51.649462 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 21 04:04:51.717897 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit. Apr 21 04:04:51.773282 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:36066.service - OpenSSH per-connection server daemon (10.0.0.1:36066). Apr 21 04:04:51.836127 systemd-logind[1564]: Removed session 7. Apr 21 04:04:52.016534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:04:52.087343 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:04:52.471572 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 36066 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:04:52.631683 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:04:52.835735 systemd-logind[1564]: New session 8 of user core. Apr 21 04:04:52.947454 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 04:04:53.260080 kubelet[1784]: E0421 04:04:53.259678 1784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:04:53.365674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:04:53.379438 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:04:53.441301 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 04:04:53.447482 systemd[1]: kubelet.service: Consumed 1.741s CPU time, 110.2M memory peak. 
Apr 21 04:04:53.460101 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 04:04:53.573371 sudo[1794]: pam_unix(sudo:session): session closed for user root Apr 21 04:04:53.863572 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 21 04:04:53.926042 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 04:04:54.235050 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 21 04:04:55.330834 augenrules[1817]: No rules Apr 21 04:04:55.361363 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 04:04:55.369714 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 21 04:04:55.389898 sudo[1793]: pam_unix(sudo:session): session closed for user root Apr 21 04:04:55.412630 sshd[1792]: Connection closed by 10.0.0.1 port 36066 Apr 21 04:04:55.423845 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Apr 21 04:04:55.612304 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:36066.service: Deactivated successfully. Apr 21 04:04:55.683403 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 04:04:55.802818 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit. Apr 21 04:04:55.861310 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:44304.service - OpenSSH per-connection server daemon (10.0.0.1:44304). Apr 21 04:04:55.952686 systemd-logind[1564]: Removed session 8. Apr 21 04:04:57.189312 sshd[1826]: Accepted publickey for core from 10.0.0.1 port 44304 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:04:57.222947 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:04:57.498942 systemd-logind[1564]: New session 9 of user core. Apr 21 04:04:57.559126 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 21 04:04:58.569871 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 04:04:58.628910 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 04:05:03.471863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 21 04:05:03.646206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:05:06.779248 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 04:05:06.950171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:05:06.959169 (dockerd)[1858]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 04:05:07.072789 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:05:08.419227 update_engine[1573]: I20260421 04:05:08.414865 1573 update_attempter.cc:509] Updating boot flags... Apr 21 04:05:09.294496 kubelet[1860]: E0421 04:05:09.292633 1860 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:05:09.380014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:05:09.412367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:05:09.478303 systemd[1]: kubelet.service: Consumed 2.424s CPU time, 108.6M memory peak. 
Apr 21 04:05:11.692391 dockerd[1858]: time="2026-04-21T04:05:11.690839901Z" level=info msg="Starting up" Apr 21 04:05:11.754310 dockerd[1858]: time="2026-04-21T04:05:11.752356460Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 21 04:05:12.370071 dockerd[1858]: time="2026-04-21T04:05:12.369166342Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 21 04:05:13.346302 dockerd[1858]: time="2026-04-21T04:05:13.343636591Z" level=info msg="Loading containers: start." Apr 21 04:05:13.651502 kernel: Initializing XFRM netlink socket Apr 21 04:05:19.437221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 21 04:05:19.547666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:05:22.867415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:05:22.948594 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:05:24.581154 kubelet[2018]: E0421 04:05:24.578913 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:05:24.593829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:05:24.594034 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:05:24.594985 systemd[1]: kubelet.service: Consumed 2.109s CPU time, 110M memory peak. Apr 21 04:05:27.160950 systemd-networkd[1483]: docker0: Link UP Apr 21 04:05:27.333116 dockerd[1858]: time="2026-04-21T04:05:27.329370037Z" level=info msg="Loading containers: done." 
Apr 21 04:05:27.906448 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1545578234-merged.mount: Deactivated successfully. Apr 21 04:05:27.929971 dockerd[1858]: time="2026-04-21T04:05:27.927329073Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 04:05:27.936303 dockerd[1858]: time="2026-04-21T04:05:27.934949054Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 21 04:05:27.956523 dockerd[1858]: time="2026-04-21T04:05:27.946278963Z" level=info msg="Initializing buildkit" Apr 21 04:05:29.256400 dockerd[1858]: time="2026-04-21T04:05:29.255330801Z" level=info msg="Completed buildkit initialization" Apr 21 04:05:29.662344 dockerd[1858]: time="2026-04-21T04:05:29.653173087Z" level=info msg="Daemon has completed initialization" Apr 21 04:05:29.672004 dockerd[1858]: time="2026-04-21T04:05:29.662542098Z" level=info msg="API listen on /run/docker.sock" Apr 21 04:05:29.678365 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 04:05:34.680356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 21 04:05:34.724482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:05:35.980383 containerd[1580]: time="2026-04-21T04:05:35.979425257Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 21 04:05:36.925197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 04:05:37.148155 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:05:37.964544 kubelet[2132]: E0421 04:05:37.963175 2132 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:05:38.054488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:05:38.100424 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:05:38.125160 systemd[1]: kubelet.service: Consumed 1.524s CPU time, 110.6M memory peak. Apr 21 04:05:40.062307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849492170.mount: Deactivated successfully. Apr 21 04:05:48.248482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 21 04:05:48.302278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:05:51.606297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 04:05:51.793422 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:05:53.446887 kubelet[2206]: E0421 04:05:53.445984 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:05:53.482146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:05:53.517251 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:05:53.532603 systemd[1]: kubelet.service: Consumed 2.255s CPU time, 110.6M memory peak. Apr 21 04:06:00.051431 containerd[1580]: time="2026-04-21T04:06:00.048962046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:06:00.061458 containerd[1580]: time="2026-04-21T04:06:00.052102472Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 21 04:06:00.061458 containerd[1580]: time="2026-04-21T04:06:00.064657888Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:06:00.259641 containerd[1580]: time="2026-04-21T04:06:00.257319237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:06:00.259641 containerd[1580]: time="2026-04-21T04:06:00.259724053Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id 
\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 24.276375658s" Apr 21 04:06:00.259641 containerd[1580]: time="2026-04-21T04:06:00.259836374Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 21 04:06:00.279257 containerd[1580]: time="2026-04-21T04:06:00.278179270Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 21 04:06:03.731777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 21 04:06:03.910338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:06:06.977215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:06:07.217343 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:06:08.282225 kubelet[2226]: E0421 04:06:08.277743 2226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:06:08.317623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:06:08.317947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:06:08.330515 systemd[1]: kubelet.service: Consumed 1.718s CPU time, 109.9M memory peak. 
Apr 21 04:06:15.333959 containerd[1580]: time="2026-04-21T04:06:15.333026772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:06:15.361431 containerd[1580]: time="2026-04-21T04:06:15.360945575Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 21 04:06:15.383278 containerd[1580]: time="2026-04-21T04:06:15.381912110Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:06:15.482285 containerd[1580]: time="2026-04-21T04:06:15.481750447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:06:15.502848 containerd[1580]: time="2026-04-21T04:06:15.498427275Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 15.218374929s" Apr 21 04:06:15.502848 containerd[1580]: time="2026-04-21T04:06:15.498843035Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 21 04:06:15.511085 containerd[1580]: time="2026-04-21T04:06:15.510266138Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 21 04:06:18.435889 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Apr 21 04:06:18.604552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:06:21.227103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:06:21.410128 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:06:22.851106 kubelet[2248]: E0421 04:06:22.828676 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:06:22.907536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:06:22.937743 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:06:23.059452 systemd[1]: kubelet.service: Consumed 1.792s CPU time, 111M memory peak. 
Apr 21 04:06:27.785115 containerd[1580]: time="2026-04-21T04:06:27.781790962Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 21 04:06:27.878564 containerd[1580]: time="2026-04-21T04:06:27.799289025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:06:27.900738 containerd[1580]: time="2026-04-21T04:06:27.894634474Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:06:27.978232 containerd[1580]: time="2026-04-21T04:06:27.975216933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:06:28.049668 containerd[1580]: time="2026-04-21T04:06:28.044056891Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 12.528964854s" Apr 21 04:06:28.049668 containerd[1580]: time="2026-04-21T04:06:28.044809494Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 21 04:06:28.096158 containerd[1580]: time="2026-04-21T04:06:28.095670005Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 21 04:06:32.953596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
Apr 21 04:06:33.046085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:06:37.452684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:06:37.735324 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:06:39.822384 kubelet[2269]: E0421 04:06:39.819017 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:06:39.892355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:06:39.937912 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:06:39.963143 systemd[1]: kubelet.service: Consumed 2.299s CPU time, 114.4M memory peak. Apr 21 04:06:49.766724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482389724.mount: Deactivated successfully. Apr 21 04:06:49.940151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 21 04:06:50.044158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:06:54.355133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 04:06:54.403919 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:06:57.478789 kubelet[2290]: E0421 04:06:57.477905 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:06:57.555344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:06:57.570241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:06:57.603144 systemd[1]: kubelet.service: Consumed 2.763s CPU time, 110.5M memory peak. Apr 21 04:07:03.382738 containerd[1580]: time="2026-04-21T04:07:03.380418540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:07:03.431293 containerd[1580]: time="2026-04-21T04:07:03.398231906Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 21 04:07:03.444369 containerd[1580]: time="2026-04-21T04:07:03.443662330Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:07:03.503472 containerd[1580]: time="2026-04-21T04:07:03.503016794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:07:03.627975 containerd[1580]: time="2026-04-21T04:07:03.627062865Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id 
\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 35.528709764s" Apr 21 04:07:03.639982 containerd[1580]: time="2026-04-21T04:07:03.632389344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 21 04:07:03.680444 containerd[1580]: time="2026-04-21T04:07:03.679095393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 21 04:07:07.682435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 21 04:07:07.981500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:07:08.151615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1826937695.mount: Deactivated successfully. Apr 21 04:07:11.053318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:07:11.263424 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:07:13.247977 kubelet[2319]: E0421 04:07:13.234105 2319 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:07:13.400460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:07:13.435323 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:07:13.503520 systemd[1]: kubelet.service: Consumed 1.796s CPU time, 110M memory peak. 
Apr 21 04:07:23.495623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 21 04:07:23.678529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:07:28.310038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:07:28.483391 (kubelet)[2376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:07:30.997904 kubelet[2376]: E0421 04:07:30.993080 2376 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:07:31.051788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:07:31.052335 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:07:31.072383 systemd[1]: kubelet.service: Consumed 2.441s CPU time, 110.8M memory peak. 
Apr 21 04:07:31.439497 containerd[1580]: time="2026-04-21T04:07:31.435335866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:07:31.439497 containerd[1580]: time="2026-04-21T04:07:31.439306127Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 21 04:07:31.480297 containerd[1580]: time="2026-04-21T04:07:31.449183497Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:07:31.598998 containerd[1580]: time="2026-04-21T04:07:31.595143948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:07:31.754643 containerd[1580]: time="2026-04-21T04:07:31.744613899Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 28.062352318s" Apr 21 04:07:31.754643 containerd[1580]: time="2026-04-21T04:07:31.747992221Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 21 04:07:31.873885 containerd[1580]: time="2026-04-21T04:07:31.865649388Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 21 04:07:34.639901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838652574.mount: Deactivated successfully. 
Apr 21 04:07:34.693924 containerd[1580]: time="2026-04-21T04:07:34.692171234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 04:07:34.724325 containerd[1580]: time="2026-04-21T04:07:34.718442193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 21 04:07:34.761529 containerd[1580]: time="2026-04-21T04:07:34.757467625Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 04:07:34.854267 containerd[1580]: time="2026-04-21T04:07:34.853760769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 04:07:34.882089 containerd[1580]: time="2026-04-21T04:07:34.875752523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.992992116s" Apr 21 04:07:34.882089 containerd[1580]: time="2026-04-21T04:07:34.876124459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 21 04:07:34.969205 containerd[1580]: time="2026-04-21T04:07:34.911157265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 21 04:07:37.845658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603571307.mount: Deactivated 
successfully. Apr 21 04:07:41.187236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 21 04:07:41.392767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:07:43.527753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:07:43.795602 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:07:45.326665 kubelet[2411]: E0421 04:07:45.322465 2411 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:07:45.430353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:07:45.434263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:07:45.443088 systemd[1]: kubelet.service: Consumed 1.743s CPU time, 110.4M memory peak. Apr 21 04:07:55.754277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 21 04:07:55.792628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 21 04:07:56.742178 containerd[1580]: time="2026-04-21T04:07:56.714255613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:07:56.752176 containerd[1580]: time="2026-04-21T04:07:56.746326078Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 21 04:07:56.766881 containerd[1580]: time="2026-04-21T04:07:56.766138507Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:07:56.883288 containerd[1580]: time="2026-04-21T04:07:56.882514425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:07:56.939222 containerd[1580]: time="2026-04-21T04:07:56.938459723Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 22.012953193s" Apr 21 04:07:56.939222 containerd[1580]: time="2026-04-21T04:07:56.939159273Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 21 04:07:57.857061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 04:07:57.910115 (kubelet)[2486]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:07:58.797391 kubelet[2486]: E0421 04:07:58.795269 2486 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 04:07:58.848150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 04:07:58.853465 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 04:07:58.867411 systemd[1]: kubelet.service: Consumed 1.946s CPU time, 110.5M memory peak. Apr 21 04:08:08.963449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Apr 21 04:08:09.019618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:08:11.757088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:08:11.943241 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 04:08:12.354270 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:08:12.405022 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 04:08:12.411559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:08:12.414687 systemd[1]: kubelet.service: Consumed 1.571s CPU time, 104.8M memory peak. Apr 21 04:08:12.575446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:08:13.078797 systemd[1]: Reload requested from client PID 2533 ('systemctl') (unit session-9.scope)... Apr 21 04:08:13.084512 systemd[1]: Reloading... 
Apr 21 04:08:14.928494 zram_generator::config[2582]: No configuration found. Apr 21 04:08:23.894352 systemd[1]: Reloading finished in 10789 ms. Apr 21 04:08:24.775993 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 21 04:08:24.778672 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 21 04:08:24.781167 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:08:24.781527 systemd[1]: kubelet.service: Consumed 1.008s CPU time, 98.3M memory peak. Apr 21 04:08:24.832234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:08:28.155847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:08:28.376498 (kubelet)[2624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 04:08:30.374995 kubelet[2624]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 04:08:30.394410 kubelet[2624]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 04:08:30.394410 kubelet[2624]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 04:08:30.394410 kubelet[2624]: I0421 04:08:30.379384 2624 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 04:08:33.341953 kubelet[2624]: I0421 04:08:33.341174 2624 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 04:08:33.341953 kubelet[2624]: I0421 04:08:33.341812 2624 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 04:08:33.352051 kubelet[2624]: I0421 04:08:33.343877 2624 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 04:08:33.564339 kubelet[2624]: E0421 04:08:33.563120 2624 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 04:08:33.648442 kubelet[2624]: I0421 04:08:33.599221 2624 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 04:08:33.770788 kubelet[2624]: I0421 04:08:33.768129 2624 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 21 04:08:33.982999 kubelet[2624]: I0421 04:08:33.981763 2624 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 04:08:34.067297 kubelet[2624]: I0421 04:08:34.064169 2624 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 04:08:34.077157 kubelet[2624]: I0421 04:08:34.068652 2624 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 04:08:34.083057 kubelet[2624]: I0421 04:08:34.078926 2624 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 04:08:34.083057 
kubelet[2624]: I0421 04:08:34.080448 2624 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 04:08:34.090256 kubelet[2624]: I0421 04:08:34.087140 2624 state_mem.go:36] "Initialized new in-memory state store" Apr 21 04:08:34.129523 kubelet[2624]: I0421 04:08:34.127866 2624 kubelet.go:480] "Attempting to sync node with API server" Apr 21 04:08:34.129523 kubelet[2624]: I0421 04:08:34.129568 2624 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 04:08:34.149320 kubelet[2624]: I0421 04:08:34.138636 2624 kubelet.go:386] "Adding apiserver pod source" Apr 21 04:08:34.149320 kubelet[2624]: I0421 04:08:34.140540 2624 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 04:08:34.197118 kubelet[2624]: E0421 04:08:34.193375 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 04:08:34.197118 kubelet[2624]: E0421 04:08:34.194033 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 04:08:34.254966 kubelet[2624]: I0421 04:08:34.250052 2624 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 21 04:08:34.258565 kubelet[2624]: I0421 04:08:34.256804 2624 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 04:08:34.265570 kubelet[2624]: W0421 
04:08:34.264844 2624 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 21 04:08:34.365418 kubelet[2624]: I0421 04:08:34.364738 2624 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 04:08:34.374075 kubelet[2624]: I0421 04:08:34.367430 2624 server.go:1289] "Started kubelet" Apr 21 04:08:34.374344 kubelet[2624]: I0421 04:08:34.373731 2624 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 04:08:34.386002 kubelet[2624]: I0421 04:08:34.382081 2624 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 04:08:34.386002 kubelet[2624]: I0421 04:08:34.383137 2624 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 04:08:34.396788 kubelet[2624]: I0421 04:08:34.396218 2624 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 04:08:34.407230 kubelet[2624]: E0421 04:08:34.395401 2624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a843b8e36066c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 04:08:34.365449927 +0000 UTC m=+5.784269367,LastTimestamp:2026-04-21 04:08:34.365449927 +0000 UTC m=+5.784269367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 04:08:34.459159 kubelet[2624]: I0421 04:08:34.402739 2624 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 04:08:34.479455 kubelet[2624]: I0421 04:08:34.445386 2624 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 04:08:34.492556 kubelet[2624]: I0421 04:08:34.445637 2624 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 04:08:34.501122 kubelet[2624]: E0421 04:08:34.446233 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 04:08:34.501122 kubelet[2624]: I0421 04:08:34.458785 2624 server.go:317] "Adding debug handlers to kubelet server" Apr 21 04:08:34.501122 kubelet[2624]: I0421 04:08:34.494129 2624 reconciler.go:26] "Reconciler: start to sync state" Apr 21 04:08:34.501122 kubelet[2624]: E0421 04:08:34.496789 2624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Apr 21 04:08:34.501122 kubelet[2624]: E0421 04:08:34.497870 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 04:08:34.501122 kubelet[2624]: I0421 04:08:34.498140 2624 factory.go:223] Registration of the systemd container factory successfully Apr 21 04:08:34.549014 kubelet[2624]: I0421 04:08:34.519380 2624 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 04:08:34.577507 kubelet[2624]: E0421 04:08:34.577072 2624 kubelet.go:1600] "Image garbage collection failed 
once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 04:08:34.581292 kubelet[2624]: I0421 04:08:34.579332 2624 factory.go:223] Registration of the containerd container factory successfully
Apr 21 04:08:34.605519 kubelet[2624]: E0421 04:08:34.603277 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 04:08:34.738872 kubelet[2624]: E0421 04:08:34.738220 2624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms"
Apr 21 04:08:34.738872 kubelet[2624]: E0421 04:08:34.738536 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 04:08:34.845618 kubelet[2624]: E0421 04:08:34.842723 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 04:08:34.948750 kubelet[2624]: E0421 04:08:34.946358 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 04:08:34.979998 kubelet[2624]: I0421 04:08:34.979533 2624 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 04:08:34.979998 kubelet[2624]: I0421 04:08:34.979785 2624 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 04:08:34.985783 kubelet[2624]: I0421 04:08:34.985045 2624 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 04:08:34.998309 kubelet[2624]: I0421 04:08:34.997501 2624 policy_none.go:49] "None policy: Start"
Apr 21 04:08:35.000340 kubelet[2624]: I0421 04:08:35.000018 2624 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 21 04:08:35.003346 kubelet[2624]: I0421 04:08:35.002136 2624 state_mem.go:35] "Initializing new in-memory state store"
Apr 21 04:08:35.053427 kubelet[2624]: E0421 04:08:35.052471 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 04:08:35.087199 kubelet[2624]: I0421 04:08:35.085930 2624 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 21 04:08:35.108214 kubelet[2624]: I0421 04:08:35.101921 2624 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 21 04:08:35.108214 kubelet[2624]: I0421 04:08:35.107301 2624 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 21 04:08:35.110101 kubelet[2624]: I0421 04:08:35.108554 2624 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 04:08:35.110101 kubelet[2624]: I0421 04:08:35.108610 2624 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 21 04:08:35.110101 kubelet[2624]: E0421 04:08:35.109225 2624 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 04:08:35.127789 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 04:08:35.138123 kubelet[2624]: E0421 04:08:35.137291 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 04:08:35.155327 kubelet[2624]: E0421 04:08:35.154445 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 04:08:35.157573 kubelet[2624]: E0421 04:08:35.157328 2624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms"
Apr 21 04:08:35.238891 kubelet[2624]: E0421 04:08:35.238276 2624 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 21 04:08:35.257176 kubelet[2624]: E0421 04:08:35.256438 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 04:08:35.300566 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 04:08:35.357928 kubelet[2624]: E0421 04:08:35.357530 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 04:08:35.385387 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 21 04:08:35.445342 kubelet[2624]: E0421 04:08:35.444960 2624 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 21 04:08:35.460119 kubelet[2624]: E0421 04:08:35.459158 2624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 04:08:35.485092 kubelet[2624]: E0421 04:08:35.482594 2624 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 04:08:35.488874 kubelet[2624]: I0421 04:08:35.488781 2624 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 04:08:35.491542 kubelet[2624]: I0421 04:08:35.488937 2624 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 04:08:35.495584 kubelet[2624]: I0421 04:08:35.495512 2624 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 04:08:35.523198 kubelet[2624]: E0421 04:08:35.521526 2624 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 04:08:35.531507 kubelet[2624]: E0421 04:08:35.527947 2624 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 21 04:08:35.534646 kubelet[2624]: E0421 04:08:35.532945 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 04:08:35.614138 kubelet[2624]: E0421 04:08:35.613291 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 04:08:35.617217 kubelet[2624]: I0421 04:08:35.617023 2624 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 04:08:35.619279 kubelet[2624]: E0421 04:08:35.619192 2624 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Apr 21 04:08:35.761552 kubelet[2624]: E0421 04:08:35.760277 2624 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 04:08:35.869491 kubelet[2624]: E0421 04:08:35.869215 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 04:08:35.890574 kubelet[2624]: I0421 04:08:35.889799 2624 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 04:08:35.899741 kubelet[2624]: E0421 04:08:35.899123 2624 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Apr 21 04:08:35.962297 kubelet[2624]: I0421 04:08:35.959928 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a7d953910b62a73d737b747e25ca9b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a7d953910b62a73d737b747e25ca9b9\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 04:08:35.969466 kubelet[2624]: I0421 04:08:35.968657 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a7d953910b62a73d737b747e25ca9b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a7d953910b62a73d737b747e25ca9b9\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 04:08:35.971425 kubelet[2624]: I0421 04:08:35.969609 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a7d953910b62a73d737b747e25ca9b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a7d953910b62a73d737b747e25ca9b9\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 04:08:35.971425 kubelet[2624]: E0421 04:08:35.970063 2624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="1.6s"
Apr 21 04:08:36.080547 kubelet[2624]: I0421 04:08:36.075509 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 04:08:36.080547 kubelet[2624]: I0421 04:08:36.076973 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 04:08:36.080547 kubelet[2624]: I0421 04:08:36.077328 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 04:08:36.080547 kubelet[2624]: I0421 04:08:36.077391 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 04:08:36.080547 kubelet[2624]: I0421 04:08:36.077422 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 04:08:36.085072 kubelet[2624]: I0421 04:08:36.077528 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost"
Apr 21 04:08:36.169506 systemd[1]: Created slice kubepods-burstable-pod1a7d953910b62a73d737b747e25ca9b9.slice - libcontainer container kubepods-burstable-pod1a7d953910b62a73d737b747e25ca9b9.slice.
Apr 21 04:08:36.249121 kubelet[2624]: E0421 04:08:36.248531 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:36.257394 kubelet[2624]: E0421 04:08:36.256865 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:36.294149 containerd[1580]: time="2026-04-21T04:08:36.292894822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a7d953910b62a73d737b747e25ca9b9,Namespace:kube-system,Attempt:0,}"
Apr 21 04:08:36.350240 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice.
Apr 21 04:08:36.411987 kubelet[2624]: I0421 04:08:36.411063 2624 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 04:08:36.429951 kubelet[2624]: E0421 04:08:36.428951 2624 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Apr 21 04:08:36.434249 kubelet[2624]: E0421 04:08:36.433633 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:36.437119 kubelet[2624]: E0421 04:08:36.437040 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:36.448888 containerd[1580]: time="2026-04-21T04:08:36.448582746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}"
Apr 21 04:08:36.480961 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice.
Apr 21 04:08:36.565033 kubelet[2624]: E0421 04:08:36.564023 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:36.577418 kubelet[2624]: E0421 04:08:36.572674 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:36.587934 containerd[1580]: time="2026-04-21T04:08:36.586771012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}"
Apr 21 04:08:36.660163 containerd[1580]: time="2026-04-21T04:08:36.646460003Z" level=info msg="connecting to shim 9a740555c7ef8d6679abb2c0d5acaafacba6b6c0c01e5f9a1f5b080d25dc5d92" address="unix:///run/containerd/s/82664aaa3fe1983d643a6a3bb53f59a2a63537a6fbdc5ac452c5c5eb64c2c0a6" namespace=k8s.io protocol=ttrpc version=3
Apr 21 04:08:36.699051 kubelet[2624]: E0421 04:08:36.696062 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 04:08:36.881481 containerd[1580]: time="2026-04-21T04:08:36.879992250Z" level=info msg="connecting to shim eda0d80aeb4d45e7ec4afe1fdf26d13e60452382956c84bda7bce6403105e108" address="unix:///run/containerd/s/66c7b441e6dc7bb40ca8341d8cbc21de350810e9a7df5ca41d313268a699e5ed" namespace=k8s.io protocol=ttrpc version=3
Apr 21 04:08:37.026975 containerd[1580]: time="2026-04-21T04:08:37.001664237Z" level=info msg="connecting to shim 798c3643bb0629cfc9f6aa72b4b3a8b90d21784e4f8460b5fe4231058789e52f" address="unix:///run/containerd/s/38bbb6d5d48443d1d7c2d664d2e4fe6fcc6d057a719b12f7a187ab079888adfb" namespace=k8s.io protocol=ttrpc version=3
Apr 21 04:08:37.172946 kubelet[2624]: E0421 04:08:37.170246 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 04:08:37.365151 kubelet[2624]: I0421 04:08:37.299189 2624 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 04:08:37.372840 systemd[1]: Started cri-containerd-eda0d80aeb4d45e7ec4afe1fdf26d13e60452382956c84bda7bce6403105e108.scope - libcontainer container eda0d80aeb4d45e7ec4afe1fdf26d13e60452382956c84bda7bce6403105e108.
Apr 21 04:08:37.407175 kubelet[2624]: E0421 04:08:37.402979 2624 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Apr 21 04:08:37.464308 systemd[1]: Started cri-containerd-9a740555c7ef8d6679abb2c0d5acaafacba6b6c0c01e5f9a1f5b080d25dc5d92.scope - libcontainer container 9a740555c7ef8d6679abb2c0d5acaafacba6b6c0c01e5f9a1f5b080d25dc5d92.
Apr 21 04:08:37.652576 kubelet[2624]: E0421 04:08:37.600298 2624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="3.2s"
Apr 21 04:08:37.655858 systemd[1]: Started cri-containerd-798c3643bb0629cfc9f6aa72b4b3a8b90d21784e4f8460b5fe4231058789e52f.scope - libcontainer container 798c3643bb0629cfc9f6aa72b4b3a8b90d21784e4f8460b5fe4231058789e52f.
Apr 21 04:08:37.718187 kubelet[2624]: E0421 04:08:37.716720 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 04:08:38.221561 containerd[1580]: time="2026-04-21T04:08:38.220503725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"eda0d80aeb4d45e7ec4afe1fdf26d13e60452382956c84bda7bce6403105e108\""
Apr 21 04:08:38.285237 kubelet[2624]: E0421 04:08:38.281859 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:38.378015 containerd[1580]: time="2026-04-21T04:08:38.377628110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a7d953910b62a73d737b747e25ca9b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a740555c7ef8d6679abb2c0d5acaafacba6b6c0c01e5f9a1f5b080d25dc5d92\""
Apr 21 04:08:38.399623 kubelet[2624]: E0421 04:08:38.397841 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:38.437271 containerd[1580]: time="2026-04-21T04:08:38.435465902Z" level=info msg="CreateContainer within sandbox \"eda0d80aeb4d45e7ec4afe1fdf26d13e60452382956c84bda7bce6403105e108\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 21 04:08:38.455394 containerd[1580]: time="2026-04-21T04:08:38.454657282Z" level=info msg="CreateContainer within sandbox \"9a740555c7ef8d6679abb2c0d5acaafacba6b6c0c01e5f9a1f5b080d25dc5d92\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 21 04:08:38.460290 containerd[1580]: time="2026-04-21T04:08:38.455684357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"798c3643bb0629cfc9f6aa72b4b3a8b90d21784e4f8460b5fe4231058789e52f\""
Apr 21 04:08:38.471157 kubelet[2624]: E0421 04:08:38.470464 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:38.635115 kubelet[2624]: E0421 04:08:38.573259 2624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a843b8e36066c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 04:08:34.365449927 +0000 UTC m=+5.784269367,LastTimestamp:2026-04-21 04:08:34.365449927 +0000 UTC m=+5.784269367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 21 04:08:38.646900 kubelet[2624]: E0421 04:08:38.645149 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 04:08:38.795717 containerd[1580]: time="2026-04-21T04:08:38.795120535Z" level=info msg="CreateContainer within sandbox \"798c3643bb0629cfc9f6aa72b4b3a8b90d21784e4f8460b5fe4231058789e52f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 21 04:08:38.862961 containerd[1580]: time="2026-04-21T04:08:38.861172656Z" level=info msg="Container 7c3a9835deaf8b342eebeb8d733d83f4b28a8583ffcaa392569e30730e8ac3d2: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:08:38.872965 containerd[1580]: time="2026-04-21T04:08:38.872438113Z" level=info msg="Container 5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:08:38.959079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141028162.mount: Deactivated successfully.
Apr 21 04:08:39.131412 kubelet[2624]: E0421 04:08:39.089426 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 04:08:39.131412 kubelet[2624]: I0421 04:08:39.127396 2624 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 04:08:39.151386 containerd[1580]: time="2026-04-21T04:08:39.148083000Z" level=info msg="CreateContainer within sandbox \"eda0d80aeb4d45e7ec4afe1fdf26d13e60452382956c84bda7bce6403105e108\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31\""
Apr 21 04:08:39.153677 kubelet[2624]: E0421 04:08:39.147050 2624 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Apr 21 04:08:39.257052 containerd[1580]: time="2026-04-21T04:08:39.250964692Z" level=info msg="CreateContainer within sandbox \"9a740555c7ef8d6679abb2c0d5acaafacba6b6c0c01e5f9a1f5b080d25dc5d92\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7c3a9835deaf8b342eebeb8d733d83f4b28a8583ffcaa392569e30730e8ac3d2\""
Apr 21 04:08:39.280158 containerd[1580]: time="2026-04-21T04:08:39.257350264Z" level=info msg="Container 232ffdd5eb036942ee4767504f269bb4e895d8d248fe1fe0ed1e953b8b36d15b: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:08:39.280158 containerd[1580]: time="2026-04-21T04:08:39.267015076Z" level=info msg="StartContainer for \"7c3a9835deaf8b342eebeb8d733d83f4b28a8583ffcaa392569e30730e8ac3d2\""
Apr 21 04:08:39.280158 containerd[1580]: time="2026-04-21T04:08:39.267093161Z" level=info msg="StartContainer for \"5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31\""
Apr 21 04:08:39.331055 containerd[1580]: time="2026-04-21T04:08:39.330538857Z" level=info msg="connecting to shim 5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31" address="unix:///run/containerd/s/66c7b441e6dc7bb40ca8341d8cbc21de350810e9a7df5ca41d313268a699e5ed" protocol=ttrpc version=3
Apr 21 04:08:39.334987 containerd[1580]: time="2026-04-21T04:08:39.333601682Z" level=info msg="CreateContainer within sandbox \"798c3643bb0629cfc9f6aa72b4b3a8b90d21784e4f8460b5fe4231058789e52f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"232ffdd5eb036942ee4767504f269bb4e895d8d248fe1fe0ed1e953b8b36d15b\""
Apr 21 04:08:39.338373 containerd[1580]: time="2026-04-21T04:08:39.336310022Z" level=info msg="connecting to shim 7c3a9835deaf8b342eebeb8d733d83f4b28a8583ffcaa392569e30730e8ac3d2" address="unix:///run/containerd/s/82664aaa3fe1983d643a6a3bb53f59a2a63537a6fbdc5ac452c5c5eb64c2c0a6" protocol=ttrpc version=3
Apr 21 04:08:39.353364 containerd[1580]: time="2026-04-21T04:08:39.351176441Z" level=info msg="StartContainer for \"232ffdd5eb036942ee4767504f269bb4e895d8d248fe1fe0ed1e953b8b36d15b\""
Apr 21 04:08:39.429274 containerd[1580]: time="2026-04-21T04:08:39.428536284Z" level=info msg="connecting to shim 232ffdd5eb036942ee4767504f269bb4e895d8d248fe1fe0ed1e953b8b36d15b" address="unix:///run/containerd/s/38bbb6d5d48443d1d7c2d664d2e4fe6fcc6d057a719b12f7a187ab079888adfb" protocol=ttrpc version=3
Apr 21 04:08:39.919047 kubelet[2624]: E0421 04:08:39.918313 2624 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 04:08:40.266388 systemd[1]: Started cri-containerd-232ffdd5eb036942ee4767504f269bb4e895d8d248fe1fe0ed1e953b8b36d15b.scope - libcontainer container 232ffdd5eb036942ee4767504f269bb4e895d8d248fe1fe0ed1e953b8b36d15b.
Apr 21 04:08:40.314918 systemd[1]: Started cri-containerd-5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31.scope - libcontainer container 5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31.
Apr 21 04:08:40.481136 systemd[1]: Started cri-containerd-7c3a9835deaf8b342eebeb8d733d83f4b28a8583ffcaa392569e30730e8ac3d2.scope - libcontainer container 7c3a9835deaf8b342eebeb8d733d83f4b28a8583ffcaa392569e30730e8ac3d2.
Apr 21 04:08:40.955908 kubelet[2624]: E0421 04:08:40.946685 2624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="6.4s"
Apr 21 04:08:41.463969 kubelet[2624]: E0421 04:08:41.463291 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 04:08:41.779516 containerd[1580]: time="2026-04-21T04:08:41.775037352Z" level=info msg="StartContainer for \"232ffdd5eb036942ee4767504f269bb4e895d8d248fe1fe0ed1e953b8b36d15b\" returns successfully"
Apr 21 04:08:41.809544 containerd[1580]: time="2026-04-21T04:08:41.809384202Z" level=info msg="StartContainer for \"5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31\" returns successfully"
Apr 21 04:08:42.124037 containerd[1580]: time="2026-04-21T04:08:42.089294544Z" level=info msg="StartContainer for \"7c3a9835deaf8b342eebeb8d733d83f4b28a8583ffcaa392569e30730e8ac3d2\" returns successfully"
Apr 21 04:08:42.219893 kubelet[2624]: E0421 04:08:42.219069 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 04:08:42.432339 kubelet[2624]: I0421 04:08:42.431440 2624 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 04:08:42.451098 kubelet[2624]: E0421 04:08:42.447646 2624 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Apr 21 04:08:42.537580 kubelet[2624]: E0421 04:08:42.536597 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 04:08:42.698977 kubelet[2624]: E0421 04:08:42.694836 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:42.887105 kubelet[2624]: E0421 04:08:42.881130 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:43.044230 kubelet[2624]: E0421 04:08:43.041184 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:43.046854 kubelet[2624]: E0421 04:08:43.046432 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:43.234084 kubelet[2624]: E0421 04:08:43.232438 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:43.252670 kubelet[2624]: E0421 04:08:43.236517 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:44.596093 kubelet[2624]: E0421 04:08:44.590221 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:44.730326 kubelet[2624]: E0421 04:08:44.680235 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:44.730326 kubelet[2624]: E0421 04:08:44.719089 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:44.730326 kubelet[2624]: E0421 04:08:44.721070 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:44.730326 kubelet[2624]: E0421 04:08:44.722031 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:44.743157 kubelet[2624]: E0421 04:08:44.738123 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:45.462619 kubelet[2624]: E0421 04:08:45.461502 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:45.462619 kubelet[2624]: E0421 04:08:45.462217 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:45.469738 kubelet[2624]: E0421 04:08:45.469614 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:45.471246 kubelet[2624]: E0421 04:08:45.470927 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:45.560940 kubelet[2624]: E0421 04:08:45.560161 2624 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 21 04:08:48.923144 kubelet[2624]: I0421 04:08:48.922643 2624 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 04:08:50.624675 kubelet[2624]: E0421 04:08:50.622077 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:50.653066 kubelet[2624]: E0421 04:08:50.644378 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:54.391586 kubelet[2624]: E0421 04:08:54.324998 2624 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 04:08:55.573770 kubelet[2624]: E0421 04:08:55.572136 2624 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 21 04:08:57.368120 kubelet[2624]: E0421 04:08:57.364559 2624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 21 04:08:57.885994 kubelet[2624]: E0421 04:08:57.883505 2624 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 04:08:57.896512 kubelet[2624]: E0421 04:08:57.887560 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:08:58.640654 kubelet[2624]: I0421 04:08:58.639985 2624 apiserver.go:52] "Watching apiserver"
Apr 21 04:08:59.338228 kubelet[2624]: I0421 04:08:59.336201 2624 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 21 04:08:59.665174 kubelet[2624]: E0421 04:08:59.558209 2624 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a843b8e36066c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 04:08:34.365449927 +0000 UTC m=+5.784269367,LastTimestamp:2026-04-21 04:08:34.365449927 +0000 UTC m=+5.784269367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 21 04:08:59.888758 kubelet[2624]: I0421 04:08:59.880519 2624 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 21 04:08:59.888758 kubelet[2624]: E0421 04:08:59.888223 2624 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 21 04:08:59.960511 kubelet[2624]: I0421 04:08:59.956324 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 04:09:01.210886 kubelet[2624]: E0421 04:09:01.204733 2624 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\"
not found" event="&Event{ObjectMeta:{localhost.18a843b8eff535f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 04:08:34.576528884 +0000 UTC m=+5.995348289,LastTimestamp:2026-04-21 04:08:34.576528884 +0000 UTC m=+5.995348289,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 04:09:01.353462 kubelet[2624]: I0421 04:09:01.324225 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 04:09:02.528938 kubelet[2624]: I0421 04:09:02.527197 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 04:09:02.659912 kubelet[2624]: E0421 04:09:02.627871 2624 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a843b907b2322c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 04:08:34.974790188 +0000 UTC m=+6.393609599,LastTimestamp:2026-04-21 04:08:34.974790188 +0000 UTC m=+6.393609599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 04:09:02.659912 kubelet[2624]: E0421 04:09:02.633055 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:09:02.659912 kubelet[2624]: E0421 04:09:02.653114 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:09:03.200964 kubelet[2624]: E0421 04:09:03.198223 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:09:03.361579 kubelet[2624]: E0421 04:09:03.284452 2624 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 21 04:09:03.361579 kubelet[2624]: I0421 04:09:03.287985 2624 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 04:09:04.394190 kubelet[2624]: E0421 04:09:04.389502 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:09:08.086978 kubelet[2624]: I0421 04:09:08.066158 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.066008711 podStartE2EDuration="7.066008711s" podCreationTimestamp="2026-04-21 04:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 04:09:07.069948025 +0000 UTC m=+38.488767459" watchObservedRunningTime="2026-04-21 04:09:08.066008711 +0000 UTC m=+39.484828117" Apr 21 04:09:08.256899 kubelet[2624]: I0421 04:09:08.147886 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.14737233 
podStartE2EDuration="6.14737233s" podCreationTimestamp="2026-04-21 04:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 04:09:08.038832478 +0000 UTC m=+39.457651962" watchObservedRunningTime="2026-04-21 04:09:08.14737233 +0000 UTC m=+39.566191747" Apr 21 04:09:08.648036 kubelet[2624]: I0421 04:09:08.640894 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.639507429 podStartE2EDuration="5.639507429s" podCreationTimestamp="2026-04-21 04:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 04:09:08.638099714 +0000 UTC m=+40.056919155" watchObservedRunningTime="2026-04-21 04:09:08.639507429 +0000 UTC m=+40.058326872" Apr 21 04:09:43.639669 systemd[1]: cri-containerd-5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31.scope: Deactivated successfully. Apr 21 04:09:43.666905 systemd[1]: cri-containerd-5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31.scope: Consumed 3.753s CPU time, 20.1M memory peak. Apr 21 04:09:43.821239 containerd[1580]: time="2026-04-21T04:09:43.817892517Z" level=info msg="received container exit event container_id:\"5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31\" id:\"5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31\" pid:2843 exit_status:1 exited_at:{seconds:1776744583 nanos:784183452}" Apr 21 04:09:46.453067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31-rootfs.mount: Deactivated successfully. 
Apr 21 04:09:46.987383 kubelet[2624]: E0421 04:09:46.983929 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:09:47.371052 kubelet[2624]: I0421 04:09:47.341040 2624 scope.go:117] "RemoveContainer" containerID="5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31" Apr 21 04:09:47.529469 kubelet[2624]: E0421 04:09:47.515411 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:09:47.878301 containerd[1580]: time="2026-04-21T04:09:47.873569271Z" level=info msg="CreateContainer within sandbox \"eda0d80aeb4d45e7ec4afe1fdf26d13e60452382956c84bda7bce6403105e108\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 21 04:09:48.572810 containerd[1580]: time="2026-04-21T04:09:48.566870312Z" level=info msg="Container 51b22528298019ccba19cd55a8b67111e3ab1b1b15f5732c6d2ecec63573f260: CDI devices from CRI Config.CDIDevices: []" Apr 21 04:09:48.667900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912832909.mount: Deactivated successfully. 
Apr 21 04:09:49.055476 containerd[1580]: time="2026-04-21T04:09:49.054503322Z" level=info msg="CreateContainer within sandbox \"eda0d80aeb4d45e7ec4afe1fdf26d13e60452382956c84bda7bce6403105e108\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"51b22528298019ccba19cd55a8b67111e3ab1b1b15f5732c6d2ecec63573f260\"" Apr 21 04:09:49.096178 containerd[1580]: time="2026-04-21T04:09:49.090115897Z" level=info msg="StartContainer for \"51b22528298019ccba19cd55a8b67111e3ab1b1b15f5732c6d2ecec63573f260\"" Apr 21 04:09:49.282396 containerd[1580]: time="2026-04-21T04:09:49.278985185Z" level=info msg="connecting to shim 51b22528298019ccba19cd55a8b67111e3ab1b1b15f5732c6d2ecec63573f260" address="unix:///run/containerd/s/66c7b441e6dc7bb40ca8341d8cbc21de350810e9a7df5ca41d313268a699e5ed" protocol=ttrpc version=3 Apr 21 04:09:52.000904 systemd[1]: Started cri-containerd-51b22528298019ccba19cd55a8b67111e3ab1b1b15f5732c6d2ecec63573f260.scope - libcontainer container 51b22528298019ccba19cd55a8b67111e3ab1b1b15f5732c6d2ecec63573f260. 
Apr 21 04:09:54.235333 kubelet[2624]: E0421 04:09:54.229949 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:09:54.315312 containerd[1580]: time="2026-04-21T04:09:54.289888389Z" level=error msg="get state for 51b22528298019ccba19cd55a8b67111e3ab1b1b15f5732c6d2ecec63573f260" error="context deadline exceeded" Apr 21 04:09:54.442981 containerd[1580]: time="2026-04-21T04:09:54.291680859Z" level=warning msg="unknown status" status=0 Apr 21 04:09:56.340558 kubelet[2624]: E0421 04:09:56.337921 2624 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.22s" Apr 21 04:09:56.443209 kubelet[2624]: E0421 04:09:56.436015 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:09:56.752951 containerd[1580]: time="2026-04-21T04:09:56.752316634Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 21 04:09:58.352573 kubelet[2624]: E0421 04:09:58.351345 2624 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.113s" Apr 21 04:09:58.982969 containerd[1580]: time="2026-04-21T04:09:58.981478910Z" level=info msg="StartContainer for \"51b22528298019ccba19cd55a8b67111e3ab1b1b15f5732c6d2ecec63573f260\" returns successfully" Apr 21 04:09:59.844073 kubelet[2624]: E0421 04:09:59.842198 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:01.003680 kubelet[2624]: E0421 04:10:01.001301 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 21 04:10:01.127832 systemd[1]: Reload requested from client PID 2979 ('systemctl') (unit session-9.scope)... Apr 21 04:10:01.128114 systemd[1]: Reloading... Apr 21 04:10:02.415512 kubelet[2624]: E0421 04:10:02.410879 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:05.204680 zram_generator::config[3027]: No configuration found. Apr 21 04:10:11.163220 kubelet[2624]: E0421 04:10:11.161655 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:12.255467 kubelet[2624]: E0421 04:10:12.253391 2624 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.139s" Apr 21 04:10:14.361157 systemd[1]: Reloading finished in 13226 ms. Apr 21 04:10:15.043647 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:10:15.266853 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 04:10:15.346284 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:10:15.356809 systemd[1]: kubelet.service: Consumed 41.121s CPU time, 140.7M memory peak. Apr 21 04:10:15.617595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 04:10:21.922860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 04:10:22.223592 (kubelet)[3069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 04:10:24.996130 kubelet[3069]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 04:10:25.057254 kubelet[3069]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 04:10:25.057254 kubelet[3069]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 04:10:25.057254 kubelet[3069]: I0421 04:10:25.005389 3069 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 04:10:25.596137 kubelet[3069]: I0421 04:10:25.594894 3069 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 04:10:25.596137 kubelet[3069]: I0421 04:10:25.595195 3069 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 04:10:25.679632 kubelet[3069]: I0421 04:10:25.606816 3069 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 04:10:25.809929 kubelet[3069]: I0421 04:10:25.780267 3069 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 04:10:25.982460 kubelet[3069]: I0421 04:10:25.977467 3069 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 04:10:26.544673 kubelet[3069]: I0421 04:10:26.539183 3069 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 21 04:10:27.000998 kubelet[3069]: I0421 04:10:27.000280 3069 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 04:10:27.055081 kubelet[3069]: I0421 04:10:27.053868 3069 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 04:10:27.065206 kubelet[3069]: I0421 04:10:27.054453 3069 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 04:10:27.067511 kubelet[3069]: I0421 04:10:27.066404 3069 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 04:10:27.067511 
kubelet[3069]: I0421 04:10:27.067069 3069 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 04:10:27.072628 kubelet[3069]: I0421 04:10:27.071923 3069 state_mem.go:36] "Initialized new in-memory state store" Apr 21 04:10:27.104321 kubelet[3069]: I0421 04:10:27.103981 3069 kubelet.go:480] "Attempting to sync node with API server" Apr 21 04:10:27.108662 kubelet[3069]: I0421 04:10:27.107978 3069 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 04:10:27.128492 kubelet[3069]: I0421 04:10:27.123663 3069 kubelet.go:386] "Adding apiserver pod source" Apr 21 04:10:27.146482 kubelet[3069]: I0421 04:10:27.141575 3069 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 04:10:27.326129 kubelet[3069]: I0421 04:10:27.302424 3069 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 21 04:10:27.360740 kubelet[3069]: I0421 04:10:27.354771 3069 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 04:10:28.254538 kubelet[3069]: I0421 04:10:28.239754 3069 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 04:10:28.254538 kubelet[3069]: I0421 04:10:28.240634 3069 server.go:1289] "Started kubelet" Apr 21 04:10:28.254538 kubelet[3069]: I0421 04:10:28.246170 3069 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 04:10:28.416458 kubelet[3069]: I0421 04:10:28.392508 3069 apiserver.go:52] "Watching apiserver" Apr 21 04:10:28.463649 sudo[3086]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 21 04:10:28.517989 sudo[3086]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 21 04:10:28.577163 kubelet[3069]: I0421 04:10:28.510825 3069 server.go:180] "Starting to listen" address="0.0.0.0" 
port=10250 Apr 21 04:10:28.633509 kubelet[3069]: I0421 04:10:28.519764 3069 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 04:10:28.642326 kubelet[3069]: I0421 04:10:28.578538 3069 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 04:10:28.895818 kubelet[3069]: I0421 04:10:28.585310 3069 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 04:10:28.898998 kubelet[3069]: I0421 04:10:28.890144 3069 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 04:10:28.913854 kubelet[3069]: I0421 04:10:28.890300 3069 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 04:10:28.917542 kubelet[3069]: I0421 04:10:28.917246 3069 reconciler.go:26] "Reconciler: start to sync state" Apr 21 04:10:28.924891 kubelet[3069]: I0421 04:10:28.924478 3069 factory.go:223] Registration of the systemd container factory successfully Apr 21 04:10:28.938212 kubelet[3069]: I0421 04:10:28.932570 3069 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 04:10:28.959420 kubelet[3069]: I0421 04:10:28.957915 3069 server.go:317] "Adding debug handlers to kubelet server" Apr 21 04:10:29.517283 kubelet[3069]: W0421 04:10:29.459675 3069 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, }. 
Err: connection error: desc = "error reading server preface: read unix @->/run/containerd/containerd.sock: use of closed network connection" Apr 21 04:10:29.851954 kubelet[3069]: E0421 04:10:29.835541 3069 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 04:10:29.868030 kubelet[3069]: I0421 04:10:29.866595 3069 factory.go:223] Registration of the containerd container factory successfully Apr 21 04:10:31.961758 kubelet[3069]: I0421 04:10:31.956271 3069 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 04:10:32.279686 kubelet[3069]: I0421 04:10:32.230672 3069 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 04:10:32.279686 kubelet[3069]: I0421 04:10:32.268462 3069 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 04:10:32.375018 kubelet[3069]: I0421 04:10:32.288468 3069 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 04:10:32.375018 kubelet[3069]: I0421 04:10:32.314011 3069 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 04:10:32.437093 kubelet[3069]: E0421 04:10:32.436262 3069 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 04:10:32.632422 kubelet[3069]: E0421 04:10:32.584255 3069 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 04:10:32.893494 kubelet[3069]: E0421 04:10:32.862371 3069 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 21 04:10:33.357469 kubelet[3069]: E0421 04:10:33.355373 3069 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 21 04:10:34.172612 kubelet[3069]: E0421 04:10:34.166492 3069 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 21 04:10:35.730998 kubelet[3069]: I0421 04:10:35.728340 3069 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 04:10:35.743084 kubelet[3069]: I0421 04:10:35.731149 3069 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 04:10:35.743084 kubelet[3069]: I0421 04:10:35.731812 3069 state_mem.go:36] "Initialized new in-memory state store" Apr 21 04:10:35.743084 kubelet[3069]: I0421 04:10:35.733396 3069 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 04:10:35.743084 kubelet[3069]: I0421 04:10:35.733455 3069 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 04:10:35.743084 kubelet[3069]: I0421 04:10:35.733597 3069 policy_none.go:49] "None policy: Start" Apr 21 04:10:35.743084 kubelet[3069]: I0421 04:10:35.733671 3069 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 04:10:35.743084 kubelet[3069]: I0421 
04:10:35.733803 3069 state_mem.go:35] "Initializing new in-memory state store" Apr 21 04:10:35.743084 kubelet[3069]: I0421 04:10:35.734258 3069 state_mem.go:75] "Updated machine memory state" Apr 21 04:10:35.783946 kubelet[3069]: E0421 04:10:35.782826 3069 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 21 04:10:36.576967 kubelet[3069]: E0421 04:10:36.552491 3069 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 04:10:36.802087 kubelet[3069]: I0421 04:10:36.796246 3069 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 04:10:36.939233 kubelet[3069]: I0421 04:10:36.840231 3069 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 04:10:36.885521 sudo[3086]: pam_unix(sudo:session): session closed for user root Apr 21 04:10:36.970468 kubelet[3069]: I0421 04:10:36.960543 3069 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 04:10:37.335419 kubelet[3069]: E0421 04:10:37.330764 3069 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 21 04:10:38.623209 kubelet[3069]: I0421 04:10:38.619540 3069 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 04:10:39.215264 kubelet[3069]: I0421 04:10:39.195773 3069 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 04:10:39.337265 kubelet[3069]: I0421 04:10:39.331105 3069 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 04:10:39.337265 kubelet[3069]: I0421 04:10:39.336064 3069 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 04:10:39.577769 kubelet[3069]: I0421 04:10:39.541524 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 21 04:10:39.577769 kubelet[3069]: I0421 04:10:39.542574 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 04:10:39.577769 kubelet[3069]: I0421 04:10:39.547505 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 04:10:39.577769 kubelet[3069]: I0421 04:10:39.550094 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a7d953910b62a73d737b747e25ca9b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a7d953910b62a73d737b747e25ca9b9\") " pod="kube-system/kube-apiserver-localhost" Apr 21 04:10:39.577769 kubelet[3069]: I0421 04:10:39.558819 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a7d953910b62a73d737b747e25ca9b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a7d953910b62a73d737b747e25ca9b9\") " pod="kube-system/kube-apiserver-localhost" Apr 21 04:10:39.623231 kubelet[3069]: I0421 04:10:39.622464 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a7d953910b62a73d737b747e25ca9b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a7d953910b62a73d737b747e25ca9b9\") " pod="kube-system/kube-apiserver-localhost" Apr 21 04:10:39.814748 kubelet[3069]: I0421 04:10:39.639317 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 04:10:39.814748 kubelet[3069]: I0421 04:10:39.640015 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 04:10:39.814748 kubelet[3069]: I0421 04:10:39.640246 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 04:10:39.814748 kubelet[3069]: I0421 04:10:39.689998 3069 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 21 04:10:39.814748 kubelet[3069]: I0421 04:10:39.721291 3069 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 21 04:10:40.528249 kubelet[3069]: E0421 04:10:40.485622 3069 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 21 04:10:40.575185 kubelet[3069]: E0421 04:10:40.539052 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:40.575185 kubelet[3069]: E0421 04:10:40.543455 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:40.702295 kubelet[3069]: E0421 04:10:40.688188 3069 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 21 04:10:40.702295 kubelet[3069]: E0421 04:10:40.696345 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:42.160980 kubelet[3069]: E0421 04:10:42.160271 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:42.284875 kubelet[3069]: E0421 04:10:42.283874 3069 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:42.473434 kubelet[3069]: E0421 04:10:42.472790 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:43.232156 kubelet[3069]: E0421 04:10:43.230229 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:43.430126 kubelet[3069]: E0421 04:10:43.250822 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:45.774499 kubelet[3069]: E0421 04:10:45.773002 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.164s" Apr 21 04:10:51.054346 kubelet[3069]: E0421 04:10:51.044491 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:51.109145 kubelet[3069]: E0421 04:10:51.068228 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:10:51.281458 kubelet[3069]: E0421 04:10:51.273472 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:11:03.546196 kubelet[3069]: I0421 04:11:03.541672 3069 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 04:11:03.776141 kubelet[3069]: E0421 
04:11:03.768907 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.323s" Apr 21 04:11:03.789484 containerd[1580]: time="2026-04-21T04:11:03.787252498Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 04:11:03.860011 kubelet[3069]: I0421 04:11:03.856488 3069 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 04:11:06.022144 kubelet[3069]: E0421 04:11:06.020437 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.564s" Apr 21 04:11:07.525576 kubelet[3069]: E0421 04:11:07.519132 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.053s" Apr 21 04:11:10.352383 kubelet[3069]: I0421 04:11:10.249178 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bbdb88eb-32fc-4589-9f8b-0f1432954a8a-kube-proxy\") pod \"kube-proxy-w9v92\" (UID: \"bbdb88eb-32fc-4589-9f8b-0f1432954a8a\") " pod="kube-system/kube-proxy-w9v92" Apr 21 04:11:10.352383 kubelet[3069]: I0421 04:11:10.250499 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbdb88eb-32fc-4589-9f8b-0f1432954a8a-lib-modules\") pod \"kube-proxy-w9v92\" (UID: \"bbdb88eb-32fc-4589-9f8b-0f1432954a8a\") " pod="kube-system/kube-proxy-w9v92" Apr 21 04:11:10.352383 kubelet[3069]: I0421 04:11:10.250555 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbdb88eb-32fc-4589-9f8b-0f1432954a8a-xtables-lock\") pod \"kube-proxy-w9v92\" (UID: \"bbdb88eb-32fc-4589-9f8b-0f1432954a8a\") " pod="kube-system/kube-proxy-w9v92" Apr 21 
04:11:10.352383 kubelet[3069]: I0421 04:11:10.252384 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwh5b\" (UniqueName: \"kubernetes.io/projected/bbdb88eb-32fc-4589-9f8b-0f1432954a8a-kube-api-access-nwh5b\") pod \"kube-proxy-w9v92\" (UID: \"bbdb88eb-32fc-4589-9f8b-0f1432954a8a\") " pod="kube-system/kube-proxy-w9v92" Apr 21 04:11:10.658653 systemd[1]: Created slice kubepods-besteffort-podbbdb88eb_32fc_4589_9f8b_0f1432954a8a.slice - libcontainer container kubepods-besteffort-podbbdb88eb_32fc_4589_9f8b_0f1432954a8a.slice. Apr 21 04:11:11.470950 kubelet[3069]: E0421 04:11:11.466286 3069 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 21 04:11:11.578916 kubelet[3069]: E0421 04:11:11.578126 3069 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bbdb88eb-32fc-4589-9f8b-0f1432954a8a-kube-proxy podName:bbdb88eb-32fc-4589-9f8b-0f1432954a8a nodeName:}" failed. No retries permitted until 2026-04-21 04:11:12.054244688 +0000 UTC m=+49.478955218 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/bbdb88eb-32fc-4589-9f8b-0f1432954a8a-kube-proxy") pod "kube-proxy-w9v92" (UID: "bbdb88eb-32fc-4589-9f8b-0f1432954a8a") : failed to sync configmap cache: timed out waiting for the condition Apr 21 04:11:12.351151 kubelet[3069]: I0421 04:11:12.350620 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cni-path\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.351151 kubelet[3069]: I0421 04:11:12.350978 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97fa60f8-356e-4d9e-8041-db7e5215b397-clustermesh-secrets\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.351151 kubelet[3069]: I0421 04:11:12.351089 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-host-proc-sys-kernel\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.351151 kubelet[3069]: I0421 04:11:12.351118 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97fa60f8-356e-4d9e-8041-db7e5215b397-hubble-tls\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.351151 kubelet[3069]: I0421 04:11:12.351137 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-run\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.351151 kubelet[3069]: I0421 04:11:12.351171 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-lib-modules\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.353110 kubelet[3069]: I0421 04:11:12.351284 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-bpf-maps\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.353110 kubelet[3069]: I0421 04:11:12.351303 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-hostproc\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.353110 kubelet[3069]: I0421 04:11:12.351320 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-etc-cni-netd\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.353110 kubelet[3069]: I0421 04:11:12.351337 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-xtables-lock\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " 
pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.353110 kubelet[3069]: I0421 04:11:12.351511 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-config-path\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.353110 kubelet[3069]: I0421 04:11:12.352190 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-host-proc-sys-net\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.353289 kubelet[3069]: I0421 04:11:12.352215 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-cgroup\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:12.353289 kubelet[3069]: I0421 04:11:12.352246 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs8zj\" (UniqueName: \"kubernetes.io/projected/97fa60f8-356e-4d9e-8041-db7e5215b397-kube-api-access-qs8zj\") pod \"cilium-jsw7z\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") " pod="kube-system/cilium-jsw7z" Apr 21 04:11:13.259139 systemd[1]: Created slice kubepods-burstable-pod97fa60f8_356e_4d9e_8041_db7e5215b397.slice - libcontainer container kubepods-burstable-pod97fa60f8_356e_4d9e_8041_db7e5215b397.slice. 
Apr 21 04:11:14.306310 kubelet[3069]: E0421 04:11:14.282525 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:11:14.498328 sudo[1830]: pam_unix(sudo:session): session closed for user root Apr 21 04:11:14.640184 sshd[1829]: Connection closed by 10.0.0.1 port 44304 Apr 21 04:11:14.693061 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Apr 21 04:11:14.722132 containerd[1580]: time="2026-04-21T04:11:14.693107500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w9v92,Uid:bbdb88eb-32fc-4589-9f8b-0f1432954a8a,Namespace:kube-system,Attempt:0,}" Apr 21 04:11:15.190372 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:44304.service: Deactivated successfully. Apr 21 04:11:15.472924 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 04:11:15.479566 systemd[1]: session-9.scope: Consumed 34.822s CPU time, 277.9M memory peak. Apr 21 04:11:15.669938 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit. Apr 21 04:11:15.860533 kubelet[3069]: E0421 04:11:15.834497 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:11:15.990273 containerd[1580]: time="2026-04-21T04:11:15.982945493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jsw7z,Uid:97fa60f8-356e-4d9e-8041-db7e5215b397,Namespace:kube-system,Attempt:0,}" Apr 21 04:11:16.002891 systemd-logind[1564]: Removed session 9. 
Apr 21 04:11:16.595315 kubelet[3069]: E0421 04:11:16.587659 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.779s" Apr 21 04:11:16.988668 containerd[1580]: time="2026-04-21T04:11:16.911232139Z" level=info msg="connecting to shim dbe267893b1e2b36dbf8f1d3a3c15a6de43e8ea22f9fa8d1da603f0aac175057" address="unix:///run/containerd/s/935799c9edeef3552e97700a2eca08df318e522b7d6601087a83e05872939a4a" namespace=k8s.io protocol=ttrpc version=3 Apr 21 04:11:18.078107 kubelet[3069]: I0421 04:11:18.075885 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52wht\" (UniqueName: \"kubernetes.io/projected/5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77-kube-api-access-52wht\") pod \"cilium-operator-6c4d7847fc-pdlcd\" (UID: \"5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77\") " pod="kube-system/cilium-operator-6c4d7847fc-pdlcd" Apr 21 04:11:18.090068 kubelet[3069]: I0421 04:11:18.089425 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pdlcd\" (UID: \"5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77\") " pod="kube-system/cilium-operator-6c4d7847fc-pdlcd" Apr 21 04:11:18.480725 systemd[1]: Created slice kubepods-besteffort-pod5d5a89ab_a873_4aaf_b4d2_1ce3236c8c77.slice - libcontainer container kubepods-besteffort-pod5d5a89ab_a873_4aaf_b4d2_1ce3236c8c77.slice. 
Apr 21 04:11:18.567482 containerd[1580]: time="2026-04-21T04:11:18.497132786Z" level=info msg="connecting to shim 6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11" address="unix:///run/containerd/s/6441ba6e02fe369efa68012eb39d2cfca5866b03fe7283870461e6419d2ab180" namespace=k8s.io protocol=ttrpc version=3 Apr 21 04:11:19.633618 systemd[1]: Started cri-containerd-dbe267893b1e2b36dbf8f1d3a3c15a6de43e8ea22f9fa8d1da603f0aac175057.scope - libcontainer container dbe267893b1e2b36dbf8f1d3a3c15a6de43e8ea22f9fa8d1da603f0aac175057. Apr 21 04:11:21.259784 kubelet[3069]: E0421 04:11:21.254396 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.793s" Apr 21 04:11:22.967928 systemd[1]: Started cri-containerd-6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11.scope - libcontainer container 6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11. Apr 21 04:11:23.325449 kubelet[3069]: E0421 04:11:23.045608 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:11:23.513083 containerd[1580]: time="2026-04-21T04:11:23.309609681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pdlcd,Uid:5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77,Namespace:kube-system,Attempt:0,}" Apr 21 04:11:24.001399 kubelet[3069]: E0421 04:11:24.000934 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.543s" Apr 21 04:11:24.221684 containerd[1580]: time="2026-04-21T04:11:24.219939329Z" level=error msg="get state for dbe267893b1e2b36dbf8f1d3a3c15a6de43e8ea22f9fa8d1da603f0aac175057" error="context deadline exceeded" Apr 21 04:11:24.297878 containerd[1580]: time="2026-04-21T04:11:24.287920559Z" level=warning msg="unknown status" status=0 Apr 21 04:11:24.537138 containerd[1580]: 
time="2026-04-21T04:11:24.529471380Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 21 04:11:26.151600 containerd[1580]: time="2026-04-21T04:11:26.147450024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w9v92,Uid:bbdb88eb-32fc-4589-9f8b-0f1432954a8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbe267893b1e2b36dbf8f1d3a3c15a6de43e8ea22f9fa8d1da603f0aac175057\"" Apr 21 04:11:26.371857 containerd[1580]: time="2026-04-21T04:11:26.368480012Z" level=info msg="connecting to shim 0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe" address="unix:///run/containerd/s/f6cf251f02f334059a4d95c00fdebda70f6d85441b0199ace8dc6223a9d4d53a" namespace=k8s.io protocol=ttrpc version=3 Apr 21 04:11:26.904145 kubelet[3069]: E0421 04:11:26.877336 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:11:27.773493 kubelet[3069]: E0421 04:11:27.771348 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.315s" Apr 21 04:11:28.056523 containerd[1580]: time="2026-04-21T04:11:27.966097907Z" level=info msg="CreateContainer within sandbox \"dbe267893b1e2b36dbf8f1d3a3c15a6de43e8ea22f9fa8d1da603f0aac175057\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 04:11:28.436330 containerd[1580]: time="2026-04-21T04:11:28.388513601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jsw7z,Uid:97fa60f8-356e-4d9e-8041-db7e5215b397,Namespace:kube-system,Attempt:0,} returns sandbox id \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\"" Apr 21 04:11:28.594601 kubelet[3069]: E0421 04:11:28.590657 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 
04:11:28.597461 systemd[1]: Started cri-containerd-0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe.scope - libcontainer container 0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe. Apr 21 04:11:28.812274 containerd[1580]: time="2026-04-21T04:11:28.782618608Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 21 04:11:29.036763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479416417.mount: Deactivated successfully. Apr 21 04:11:29.037450 containerd[1580]: time="2026-04-21T04:11:29.036953111Z" level=info msg="Container f884e0d52742c409759d5eb65923d62053f0cb50758d7d0b0b670eacc68eb875: CDI devices from CRI Config.CDIDevices: []" Apr 21 04:11:29.137974 containerd[1580]: time="2026-04-21T04:11:29.136303744Z" level=info msg="CreateContainer within sandbox \"dbe267893b1e2b36dbf8f1d3a3c15a6de43e8ea22f9fa8d1da603f0aac175057\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f884e0d52742c409759d5eb65923d62053f0cb50758d7d0b0b670eacc68eb875\"" Apr 21 04:11:29.176319 containerd[1580]: time="2026-04-21T04:11:29.168461538Z" level=info msg="StartContainer for \"f884e0d52742c409759d5eb65923d62053f0cb50758d7d0b0b670eacc68eb875\"" Apr 21 04:11:29.470316 containerd[1580]: time="2026-04-21T04:11:29.466262797Z" level=info msg="connecting to shim f884e0d52742c409759d5eb65923d62053f0cb50758d7d0b0b670eacc68eb875" address="unix:///run/containerd/s/935799c9edeef3552e97700a2eca08df318e522b7d6601087a83e05872939a4a" protocol=ttrpc version=3 Apr 21 04:11:30.174521 systemd[1]: Started cri-containerd-f884e0d52742c409759d5eb65923d62053f0cb50758d7d0b0b670eacc68eb875.scope - libcontainer container f884e0d52742c409759d5eb65923d62053f0cb50758d7d0b0b670eacc68eb875. 
Apr 21 04:11:30.824149 containerd[1580]: time="2026-04-21T04:11:30.822573456Z" level=error msg="get state for 0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe" error="context deadline exceeded" Apr 21 04:11:30.824149 containerd[1580]: time="2026-04-21T04:11:30.823083291Z" level=warning msg="unknown status" status=0 Apr 21 04:11:31.685194 containerd[1580]: time="2026-04-21T04:11:31.680203453Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 21 04:11:31.988899 kubelet[3069]: E0421 04:11:31.986232 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.52s" Apr 21 04:11:32.624855 containerd[1580]: time="2026-04-21T04:11:32.622357132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pdlcd,Uid:5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\"" Apr 21 04:11:32.977619 kubelet[3069]: E0421 04:11:32.873458 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:11:34.378632 containerd[1580]: time="2026-04-21T04:11:34.377424787Z" level=error msg="get state for f884e0d52742c409759d5eb65923d62053f0cb50758d7d0b0b670eacc68eb875" error="context deadline exceeded" Apr 21 04:11:34.468164 containerd[1580]: time="2026-04-21T04:11:34.438130147Z" level=warning msg="unknown status" status=0 Apr 21 04:11:35.533683 kubelet[3069]: E0421 04:11:35.529025 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.004s" Apr 21 04:11:36.762467 containerd[1580]: time="2026-04-21T04:11:36.719071009Z" level=error msg="get state for f884e0d52742c409759d5eb65923d62053f0cb50758d7d0b0b670eacc68eb875" error="context deadline exceeded" Apr 21 04:11:36.856116 
containerd[1580]: time="2026-04-21T04:11:36.771975810Z" level=warning msg="unknown status" status=0 Apr 21 04:11:37.461820 update_engine[1573]: I20260421 04:11:37.434505 1573 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 21 04:11:37.461820 update_engine[1573]: I20260421 04:11:37.436206 1573 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 21 04:11:37.599303 update_engine[1573]: I20260421 04:11:37.484483 1573 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 21 04:11:37.653603 update_engine[1573]: I20260421 04:11:37.646550 1573 omaha_request_params.cc:62] Current group set to stable Apr 21 04:11:37.678116 update_engine[1573]: I20260421 04:11:37.662125 1573 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 21 04:11:37.678116 update_engine[1573]: I20260421 04:11:37.665318 1573 update_attempter.cc:643] Scheduling an action processor start. Apr 21 04:11:37.678116 update_engine[1573]: I20260421 04:11:37.665686 1573 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 21 04:11:37.678116 update_engine[1573]: I20260421 04:11:37.668958 1573 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 21 04:11:37.779090 update_engine[1573]: I20260421 04:11:37.674348 1573 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 21 04:11:37.779090 update_engine[1573]: I20260421 04:11:37.682143 1573 omaha_request_action.cc:272] Request: Apr 21 04:11:37.779090 update_engine[1573]: Apr 21 04:11:37.779090 update_engine[1573]: Apr 21 04:11:37.779090 update_engine[1573]: Apr 21 04:11:37.779090 update_engine[1573]: Apr 21 04:11:37.779090 update_engine[1573]: Apr 21 04:11:37.779090 update_engine[1573]: Apr 21 04:11:37.779090 update_engine[1573]: Apr 21 04:11:37.779090 update_engine[1573]: Apr 21 04:11:37.779090 update_engine[1573]: I20260421 04:11:37.684144 1573 libcurl_http_fetcher.cc:47] 
Starting/Resuming transfer Apr 21 04:11:37.825937 update_engine[1573]: I20260421 04:11:37.823549 1573 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 21 04:11:37.840401 update_engine[1573]: I20260421 04:11:37.839675 1573 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 21 04:11:37.866807 update_engine[1573]: E20260421 04:11:37.865551 1573 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 21 04:11:37.882668 update_engine[1573]: I20260421 04:11:37.866616 1573 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 21 04:11:37.886261 containerd[1580]: time="2026-04-21T04:11:37.885260313Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 21 04:11:37.886261 containerd[1580]: time="2026-04-21T04:11:37.885771668Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 21 04:11:38.118394 locksmithd[1665]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 21 04:11:41.317411 containerd[1580]: time="2026-04-21T04:11:41.317178399Z" level=info msg="StartContainer for \"f884e0d52742c409759d5eb65923d62053f0cb50758d7d0b0b670eacc68eb875\" returns successfully" Apr 21 04:11:42.376268 kubelet[3069]: E0421 04:11:42.372659 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.772s" Apr 21 04:11:44.344253 kubelet[3069]: E0421 04:11:44.343355 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:11:45.370496 kubelet[3069]: E0421 04:11:45.369537 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:11:48.471430 update_engine[1573]: I20260421 04:11:48.437569 1573 
libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 21 04:11:48.471430 update_engine[1573]: I20260421 04:11:48.455319 1573 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 21 04:11:48.665959 update_engine[1573]: I20260421 04:11:48.601548 1573 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 21 04:11:48.678959 update_engine[1573]: E20260421 04:11:48.665896 1573 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 21 04:11:48.747963 update_engine[1573]: I20260421 04:11:48.724664 1573 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 21 04:11:49.558514 kubelet[3069]: E0421 04:11:49.556311 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.116s" Apr 21 04:11:52.121463 kubelet[3069]: E0421 04:11:52.113506 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.55s" Apr 21 04:11:55.725940 kubelet[3069]: E0421 04:11:55.697132 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.07s" Apr 21 04:11:59.520571 update_engine[1573]: I20260421 04:11:59.516204 1573 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 21 04:11:59.520571 update_engine[1573]: I20260421 04:11:59.522173 1573 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 21 04:11:59.649972 update_engine[1573]: I20260421 04:11:59.640290 1573 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 21 04:11:59.662783 update_engine[1573]: E20260421 04:11:59.651751 1573 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 21 04:11:59.718001 update_engine[1573]: I20260421 04:11:59.682487 1573 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 21 04:12:00.213431 kubelet[3069]: E0421 04:12:00.211341 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.512s" Apr 21 04:12:06.016112 kubelet[3069]: E0421 04:12:06.015344 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.779s" Apr 21 04:12:08.678652 kubelet[3069]: E0421 04:12:08.574050 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:12:08.678652 kubelet[3069]: E0421 04:12:08.574482 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:12:10.418676 update_engine[1573]: I20260421 04:12:10.416018 1573 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 21 04:12:10.424961 update_engine[1573]: I20260421 04:12:10.422673 1573 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 21 04:12:10.444016 update_engine[1573]: I20260421 04:12:10.442680 1573 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 21 04:12:10.484213 update_engine[1573]: E20260421 04:12:10.479418 1573 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 21 04:12:10.484213 update_engine[1573]: I20260421 04:12:10.481767 1573 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 21 04:12:10.484213 update_engine[1573]: I20260421 04:12:10.482144 1573 omaha_request_action.cc:617] Omaha request response: Apr 21 04:12:10.544559 update_engine[1573]: E20260421 04:12:10.488204 1573 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 21 04:12:10.558062 update_engine[1573]: I20260421 04:12:10.554351 1573 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 21 04:12:10.558062 update_engine[1573]: I20260421 04:12:10.554812 1573 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 21 04:12:10.558062 update_engine[1573]: I20260421 04:12:10.554824 1573 update_attempter.cc:306] Processing Done. Apr 21 04:12:10.564154 update_engine[1573]: E20260421 04:12:10.557425 1573 update_attempter.cc:619] Update failed. Apr 21 04:12:10.564154 update_engine[1573]: I20260421 04:12:10.561412 1573 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 21 04:12:10.564154 update_engine[1573]: I20260421 04:12:10.561949 1573 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 21 04:12:10.564154 update_engine[1573]: I20260421 04:12:10.561975 1573 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 21 04:12:10.573685 update_engine[1573]: I20260421 04:12:10.567315 1573 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 21 04:12:10.573685 update_engine[1573]: I20260421 04:12:10.568392 1573 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 21 04:12:10.573685 update_engine[1573]: I20260421 04:12:10.568639 1573 omaha_request_action.cc:272] Request: Apr 21 04:12:10.573685 update_engine[1573]: Apr 21 04:12:10.573685 update_engine[1573]: Apr 21 04:12:10.573685 update_engine[1573]: Apr 21 04:12:10.573685 update_engine[1573]: Apr 21 04:12:10.573685 update_engine[1573]: Apr 21 04:12:10.573685 update_engine[1573]: Apr 21 04:12:10.573685 update_engine[1573]: I20260421 04:12:10.570098 1573 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 21 04:12:10.573685 update_engine[1573]: I20260421 04:12:10.570479 1573 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 21 04:12:10.582051 update_engine[1573]: I20260421 04:12:10.578037 1573 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 21 04:12:10.585136 locksmithd[1665]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 21 04:12:10.610454 update_engine[1573]: E20260421 04:12:10.593603 1573 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 21 04:12:10.610454 update_engine[1573]: I20260421 04:12:10.594198 1573 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 21 04:12:10.610454 update_engine[1573]: I20260421 04:12:10.594210 1573 omaha_request_action.cc:617] Omaha request response: Apr 21 04:12:10.610454 update_engine[1573]: I20260421 04:12:10.594225 1573 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 21 04:12:10.610454 update_engine[1573]: I20260421 04:12:10.594231 1573 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 21 04:12:10.610454 update_engine[1573]: I20260421 04:12:10.594242 1573 update_attempter.cc:306] Processing Done. Apr 21 04:12:10.610454 update_engine[1573]: I20260421 04:12:10.594255 1573 update_attempter.cc:310] Error event sent. 
Apr 21 04:12:10.610454 update_engine[1573]: I20260421 04:12:10.594348 1573 update_check_scheduler.cc:74] Next update check in 49m34s
Apr 21 04:12:10.801541 locksmithd[1665]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 21 04:12:16.633442 kubelet[3069]: E0421 04:12:16.631255 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:12:28.800363 kubelet[3069]: E0421 04:12:28.797802 3069 kubelet_node_status.go:460] "Node not becoming ready in time after startup"
Apr 21 04:12:31.230208 kubelet[3069]: E0421 04:12:31.225535 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:12:36.351009 kubelet[3069]: E0421 04:12:36.339083 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:12:36.552202 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:51702.service - OpenSSH per-connection server daemon (10.0.0.1:51702).
Apr 21 04:12:37.514725 kubelet[3069]: E0421 04:12:37.512032 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.064s"
Apr 21 04:12:42.137635 kubelet[3069]: E0421 04:12:42.136957 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:12:42.585128 kubelet[3069]: E0421 04:12:42.576142 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.025s"
Apr 21 04:12:43.064866 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 51702 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:12:43.386314 sshd-session[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:12:44.279838 systemd-logind[1564]: New session 10 of user core.
Apr 21 04:12:44.354743 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 21 04:12:47.195136 kubelet[3069]: E0421 04:12:47.185840 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:12:47.683222 sshd[3550]: Connection closed by 10.0.0.1 port 51702
Apr 21 04:12:47.704994 sshd-session[3547]: pam_unix(sshd:session): session closed for user core
Apr 21 04:12:47.975308 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:51702.service: Deactivated successfully.
Apr 21 04:12:48.061886 systemd[1]: session-10.scope: Deactivated successfully.
Apr 21 04:12:48.071517 systemd[1]: session-10.scope: Consumed 1.084s CPU time, 15.6M memory peak.
Apr 21 04:12:48.098606 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit.
Apr 21 04:12:48.149962 systemd-logind[1564]: Removed session 10.
Apr 21 04:12:51.207090 kubelet[3069]: E0421 04:12:51.206183 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:12:52.222373 kubelet[3069]: E0421 04:12:52.217462 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:12:52.844094 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:58654.service - OpenSSH per-connection server daemon (10.0.0.1:58654).
Apr 21 04:12:53.704902 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 58654 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:12:53.742602 sshd-session[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:12:54.072099 systemd-logind[1564]: New session 11 of user core.
Apr 21 04:12:54.156379 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 21 04:12:56.176265 sshd[3575]: Connection closed by 10.0.0.1 port 58654
Apr 21 04:12:56.222485 sshd-session[3572]: pam_unix(sshd:session): session closed for user core
Apr 21 04:12:56.700627 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:58654.service: Deactivated successfully.
Apr 21 04:12:56.968133 systemd[1]: session-11.scope: Deactivated successfully.
Apr 21 04:12:57.071608 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit.
Apr 21 04:12:57.205306 systemd-logind[1564]: Removed session 11.
Apr 21 04:12:57.294621 kubelet[3069]: E0421 04:12:57.282553 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:01.459209 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:33192.service - OpenSSH per-connection server daemon (10.0.0.1:33192).
Apr 21 04:13:02.329825 kubelet[3069]: E0421 04:13:02.326998 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:02.998894 sshd[3596]: Accepted publickey for core from 10.0.0.1 port 33192 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:13:03.080855 sshd-session[3596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:13:03.317815 systemd-logind[1564]: New session 12 of user core.
Apr 21 04:13:03.368767 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 21 04:13:05.544280 sshd[3599]: Connection closed by 10.0.0.1 port 33192
Apr 21 04:13:05.556854 sshd-session[3596]: pam_unix(sshd:session): session closed for user core
Apr 21 04:13:05.856123 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:33192.service: Deactivated successfully.
Apr 21 04:13:05.888305 systemd[1]: session-12.scope: Deactivated successfully.
Apr 21 04:13:05.938187 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit.
Apr 21 04:13:06.018155 systemd-logind[1564]: Removed session 12.
Apr 21 04:13:07.410304 kubelet[3069]: E0421 04:13:07.380422 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:10.937579 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:59198.service - OpenSSH per-connection server daemon (10.0.0.1:59198).
Apr 21 04:13:12.481880 kubelet[3069]: E0421 04:13:12.480288 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:13.254803 sshd[3616]: Accepted publickey for core from 10.0.0.1 port 59198 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:13:13.271645 sshd-session[3616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:13:13.568981 systemd-logind[1564]: New session 13 of user core.
Apr 21 04:13:13.626359 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 21 04:13:14.549245 kubelet[3069]: E0421 04:13:14.513284 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:13:16.031878 sshd[3619]: Connection closed by 10.0.0.1 port 59198
Apr 21 04:13:16.057556 sshd-session[3616]: pam_unix(sshd:session): session closed for user core
Apr 21 04:13:16.376019 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:59198.service: Deactivated successfully.
Apr 21 04:13:16.459874 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 04:13:16.502421 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit.
Apr 21 04:13:16.763409 systemd-logind[1564]: Removed session 13.
Apr 21 04:13:17.569923 kubelet[3069]: E0421 04:13:17.563983 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:21.337292 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:54890.service - OpenSSH per-connection server daemon (10.0.0.1:54890).
Apr 21 04:13:22.246034 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 54890 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:13:22.287800 sshd-session[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:13:22.385453 systemd-logind[1564]: New session 14 of user core.
Apr 21 04:13:22.432834 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 04:13:22.466363 kubelet[3069]: E0421 04:13:22.463232 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:13:22.714766 kubelet[3069]: E0421 04:13:22.710041 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:23.479347 kubelet[3069]: E0421 04:13:23.477602 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:13:24.420181 sshd[3636]: Connection closed by 10.0.0.1 port 54890
Apr 21 04:13:24.423508 sshd-session[3633]: pam_unix(sshd:session): session closed for user core
Apr 21 04:13:24.630531 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:54890.service: Deactivated successfully.
Apr 21 04:13:24.699911 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 04:13:24.749482 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit.
Apr 21 04:13:24.780323 systemd-logind[1564]: Removed session 14.
Apr 21 04:13:25.611414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92166395.mount: Deactivated successfully.
Apr 21 04:13:27.758904 kubelet[3069]: E0421 04:13:27.750356 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:29.691439 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:49606.service - OpenSSH per-connection server daemon (10.0.0.1:49606).
Apr 21 04:13:30.374524 sshd[3662]: Accepted publickey for core from 10.0.0.1 port 49606 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:13:30.400580 sshd-session[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:13:30.579967 systemd-logind[1564]: New session 15 of user core.
Apr 21 04:13:30.648324 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 04:13:32.180399 sshd[3666]: Connection closed by 10.0.0.1 port 49606
Apr 21 04:13:32.201672 sshd-session[3662]: pam_unix(sshd:session): session closed for user core
Apr 21 04:13:32.340818 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:49606.service: Deactivated successfully.
Apr 21 04:13:32.497097 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 04:13:32.542263 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit.
Apr 21 04:13:32.587652 systemd-logind[1564]: Removed session 15.
Apr 21 04:13:32.862468 kubelet[3069]: E0421 04:13:32.853648 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:37.364899 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:35030.service - OpenSSH per-connection server daemon (10.0.0.1:35030).
Apr 21 04:13:37.913323 kubelet[3069]: E0421 04:13:37.904325 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:38.159503 sshd[3696]: Accepted publickey for core from 10.0.0.1 port 35030 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:13:38.167738 sshd-session[3696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:13:38.225638 containerd[1580]: time="2026-04-21T04:13:38.223470263Z" level=warning msg="container event discarded" container=eda0d80aeb4d45e7ec4afe1fdf26d13e60452382956c84bda7bce6403105e108 type=CONTAINER_CREATED_EVENT
Apr 21 04:13:38.225638 containerd[1580]: time="2026-04-21T04:13:38.224000840Z" level=warning msg="container event discarded" container=eda0d80aeb4d45e7ec4afe1fdf26d13e60452382956c84bda7bce6403105e108 type=CONTAINER_STARTED_EVENT
Apr 21 04:13:38.268337 systemd-logind[1564]: New session 16 of user core.
Apr 21 04:13:38.343923 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 04:13:38.411887 containerd[1580]: time="2026-04-21T04:13:38.397150533Z" level=warning msg="container event discarded" container=9a740555c7ef8d6679abb2c0d5acaafacba6b6c0c01e5f9a1f5b080d25dc5d92 type=CONTAINER_CREATED_EVENT
Apr 21 04:13:38.411887 containerd[1580]: time="2026-04-21T04:13:38.411036012Z" level=warning msg="container event discarded" container=9a740555c7ef8d6679abb2c0d5acaafacba6b6c0c01e5f9a1f5b080d25dc5d92 type=CONTAINER_STARTED_EVENT
Apr 21 04:13:38.492855 containerd[1580]: time="2026-04-21T04:13:38.469609474Z" level=warning msg="container event discarded" container=798c3643bb0629cfc9f6aa72b4b3a8b90d21784e4f8460b5fe4231058789e52f type=CONTAINER_CREATED_EVENT
Apr 21 04:13:38.494848 containerd[1580]: time="2026-04-21T04:13:38.494542034Z" level=warning msg="container event discarded" container=798c3643bb0629cfc9f6aa72b4b3a8b90d21784e4f8460b5fe4231058789e52f type=CONTAINER_STARTED_EVENT
Apr 21 04:13:39.166632 containerd[1580]: time="2026-04-21T04:13:39.162802481Z" level=warning msg="container event discarded" container=5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31 type=CONTAINER_CREATED_EVENT
Apr 21 04:13:39.211848 containerd[1580]: time="2026-04-21T04:13:39.209848622Z" level=warning msg="container event discarded" container=7c3a9835deaf8b342eebeb8d733d83f4b28a8583ffcaa392569e30730e8ac3d2 type=CONTAINER_CREATED_EVENT
Apr 21 04:13:39.334981 containerd[1580]: time="2026-04-21T04:13:39.333777731Z" level=warning msg="container event discarded" container=232ffdd5eb036942ee4767504f269bb4e895d8d248fe1fe0ed1e953b8b36d15b type=CONTAINER_CREATED_EVENT
Apr 21 04:13:40.150207 sshd[3699]: Connection closed by 10.0.0.1 port 35030
Apr 21 04:13:40.162212 sshd-session[3696]: pam_unix(sshd:session): session closed for user core
Apr 21 04:13:40.372436 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:35030.service: Deactivated successfully.
Apr 21 04:13:40.403225 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 04:13:40.664861 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit.
Apr 21 04:13:40.680664 systemd-logind[1564]: Removed session 16.
Apr 21 04:13:41.738498 containerd[1580]: time="2026-04-21T04:13:41.737058724Z" level=warning msg="container event discarded" container=232ffdd5eb036942ee4767504f269bb4e895d8d248fe1fe0ed1e953b8b36d15b type=CONTAINER_STARTED_EVENT
Apr 21 04:13:41.816236 containerd[1580]: time="2026-04-21T04:13:41.813683375Z" level=warning msg="container event discarded" container=5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31 type=CONTAINER_STARTED_EVENT
Apr 21 04:13:42.053819 containerd[1580]: time="2026-04-21T04:13:42.050140502Z" level=warning msg="container event discarded" container=7c3a9835deaf8b342eebeb8d733d83f4b28a8583ffcaa392569e30730e8ac3d2 type=CONTAINER_STARTED_EVENT
Apr 21 04:13:42.949471 kubelet[3069]: E0421 04:13:42.945338 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:45.397389 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:48758.service - OpenSSH per-connection server daemon (10.0.0.1:48758).
Apr 21 04:13:46.915680 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 48758 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:13:46.950645 sshd-session[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:13:47.356303 systemd-logind[1564]: New session 17 of user core.
Apr 21 04:13:47.373753 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 04:13:47.979276 kubelet[3069]: E0421 04:13:47.977604 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:48.430415 sshd[3717]: Connection closed by 10.0.0.1 port 48758
Apr 21 04:13:48.432896 sshd-session[3714]: pam_unix(sshd:session): session closed for user core
Apr 21 04:13:48.511162 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:48758.service: Deactivated successfully.
Apr 21 04:13:48.528823 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 04:13:48.558818 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit.
Apr 21 04:13:48.604640 systemd-logind[1564]: Removed session 17.
Apr 21 04:13:53.029285 kubelet[3069]: E0421 04:13:53.027724 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:13:53.668774 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:48760.service - OpenSSH per-connection server daemon (10.0.0.1:48760).
Apr 21 04:13:54.575414 sshd[3734]: Accepted publickey for core from 10.0.0.1 port 48760 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:13:54.585246 sshd-session[3734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:13:54.677502 systemd-logind[1564]: New session 18 of user core.
Apr 21 04:13:54.709644 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 04:13:55.593712 containerd[1580]: time="2026-04-21T04:13:55.592495516Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 04:13:55.606885 containerd[1580]: time="2026-04-21T04:13:55.606818482Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 21 04:13:55.622547 containerd[1580]: time="2026-04-21T04:13:55.616376882Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 04:13:55.657053 containerd[1580]: time="2026-04-21T04:13:55.655117603Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 2m26.852034471s"
Apr 21 04:13:55.657053 containerd[1580]: time="2026-04-21T04:13:55.655466355Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 21 04:13:55.790090 containerd[1580]: time="2026-04-21T04:13:55.785531436Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 21 04:13:56.004826 containerd[1580]: time="2026-04-21T04:13:56.001482745Z" level=info msg="CreateContainer within sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 21 04:13:56.471134 sshd[3737]: Connection closed by 10.0.0.1 port 48760
Apr 21 04:13:56.499911 sshd-session[3734]: pam_unix(sshd:session): session closed for user core
Apr 21 04:13:56.531551 containerd[1580]: time="2026-04-21T04:13:56.530160371Z" level=info msg="Container 0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:13:56.576993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488197048.mount: Deactivated successfully.
Apr 21 04:13:56.674255 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:48760.service: Deactivated successfully.
Apr 21 04:13:56.748828 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 04:13:56.822208 containerd[1580]: time="2026-04-21T04:13:56.821454540Z" level=info msg="CreateContainer within sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\""
Apr 21 04:13:56.860636 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit.
Apr 21 04:13:56.878134 containerd[1580]: time="2026-04-21T04:13:56.878074683Z" level=info msg="StartContainer for \"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\""
Apr 21 04:13:56.879983 systemd-logind[1564]: Removed session 18.
Apr 21 04:13:56.912311 containerd[1580]: time="2026-04-21T04:13:56.911278414Z" level=info msg="connecting to shim 0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457" address="unix:///run/containerd/s/6441ba6e02fe369efa68012eb39d2cfca5866b03fe7283870461e6419d2ab180" protocol=ttrpc version=3
Apr 21 04:13:58.203150 systemd[1]: Started cri-containerd-0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457.scope - libcontainer container 0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457.
Apr 21 04:13:58.227339 kubelet[3069]: E0421 04:13:58.221779 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:14:00.712785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount93023391.mount: Deactivated successfully.
Apr 21 04:14:00.726385 systemd[1]: cri-containerd-0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457.scope: Deactivated successfully.
Apr 21 04:14:00.750778 containerd[1580]: time="2026-04-21T04:14:00.750393936Z" level=info msg="StartContainer for \"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\" returns successfully"
Apr 21 04:14:00.919437 containerd[1580]: time="2026-04-21T04:14:00.919204289Z" level=info msg="received container exit event container_id:\"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\" id:\"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\" pid:3768 exited_at:{seconds:1776744840 nanos:905524103}"
Apr 21 04:14:01.785550 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:59348.service - OpenSSH per-connection server daemon (10.0.0.1:59348).
Apr 21 04:14:02.168275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457-rootfs.mount: Deactivated successfully.
Apr 21 04:14:02.377883 kubelet[3069]: E0421 04:14:02.370509 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:14:03.281084 kubelet[3069]: E0421 04:14:03.280215 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:14:03.379346 sshd[3809]: Accepted publickey for core from 10.0.0.1 port 59348 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:14:03.487932 kubelet[3069]: I0421 04:14:03.484094 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w9v92" podStartSLOduration=175.483682342 podStartE2EDuration="2m55.483682342s" podCreationTimestamp="2026-04-21 04:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 04:11:45.164597182 +0000 UTC m=+82.589307695" watchObservedRunningTime="2026-04-21 04:14:03.483682342 +0000 UTC m=+220.908392839"
Apr 21 04:14:03.606747 sshd-session[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:14:03.846070 systemd-logind[1564]: New session 19 of user core.
Apr 21 04:14:03.861154 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 04:14:03.950637 kubelet[3069]: E0421 04:14:03.946786 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:14:04.308471 containerd[1580]: time="2026-04-21T04:14:04.305149417Z" level=info msg="CreateContainer within sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 21 04:14:05.590666 containerd[1580]: time="2026-04-21T04:14:05.581639204Z" level=info msg="Container 1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:14:05.663036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544751896.mount: Deactivated successfully.
Apr 21 04:14:05.761404 kubelet[3069]: E0421 04:14:05.758110 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.3s"
Apr 21 04:14:05.852672 kubelet[3069]: E0421 04:14:05.848484 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:14:06.257248 containerd[1580]: time="2026-04-21T04:14:06.251354130Z" level=info msg="CreateContainer within sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\""
Apr 21 04:14:06.386855 containerd[1580]: time="2026-04-21T04:14:06.386374207Z" level=info msg="StartContainer for \"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\""
Apr 21 04:14:06.440668 containerd[1580]: time="2026-04-21T04:14:06.439051661Z" level=info msg="connecting to shim 1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4" address="unix:///run/containerd/s/6441ba6e02fe369efa68012eb39d2cfca5866b03fe7283870461e6419d2ab180" protocol=ttrpc version=3
Apr 21 04:14:08.019178 sshd[3816]: Connection closed by 10.0.0.1 port 59348
Apr 21 04:14:08.041146 sshd-session[3809]: pam_unix(sshd:session): session closed for user core
Apr 21 04:14:08.269139 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:59348.service: Deactivated successfully.
Apr 21 04:14:08.420679 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 04:14:08.421847 systemd[1]: session-19.scope: Consumed 1.398s CPU time, 14.4M memory peak.
Apr 21 04:14:08.428149 kubelet[3069]: E0421 04:14:08.423281 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:14:08.495451 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit.
Apr 21 04:14:08.512937 systemd[1]: Started cri-containerd-1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4.scope - libcontainer container 1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4.
Apr 21 04:14:08.520459 systemd-logind[1564]: Removed session 19.
Apr 21 04:14:09.131870 containerd[1580]: time="2026-04-21T04:14:09.131622577Z" level=info msg="StartContainer for \"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\" returns successfully"
Apr 21 04:14:09.572909 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 04:14:09.584654 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 04:14:09.611041 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 21 04:14:09.656030 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 04:14:09.677010 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 21 04:14:09.729588 systemd[1]: cri-containerd-1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4.scope: Deactivated successfully.
Apr 21 04:14:09.960340 containerd[1580]: time="2026-04-21T04:14:09.884647525Z" level=info msg="received container exit event container_id:\"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\" id:\"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\" pid:3846 exited_at:{seconds:1776744849 nanos:879373202}"
Apr 21 04:14:10.143998 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 04:14:10.230171 kubelet[3069]: E0421 04:14:10.222052 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:14:10.813065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4-rootfs.mount: Deactivated successfully.
Apr 21 04:14:11.374997 kubelet[3069]: E0421 04:14:11.370109 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:14:11.459117 containerd[1580]: time="2026-04-21T04:14:11.448970871Z" level=info msg="CreateContainer within sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 21 04:14:11.709287 containerd[1580]: time="2026-04-21T04:14:11.705293095Z" level=info msg="Container 8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:14:11.828970 containerd[1580]: time="2026-04-21T04:14:11.828679322Z" level=info msg="CreateContainer within sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\""
Apr 21 04:14:11.869623 containerd[1580]: time="2026-04-21T04:14:11.869142989Z" level=info msg="StartContainer for \"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\""
Apr 21 04:14:11.988227 containerd[1580]: time="2026-04-21T04:14:11.976818743Z" level=info msg="connecting to shim 8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491" address="unix:///run/containerd/s/6441ba6e02fe369efa68012eb39d2cfca5866b03fe7283870461e6419d2ab180" protocol=ttrpc version=3
Apr 21 04:14:12.974053 systemd[1]: Started cri-containerd-8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491.scope - libcontainer container 8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491.
Apr 21 04:14:13.149099 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:55548.service - OpenSSH per-connection server daemon (10.0.0.1:55548).
Apr 21 04:14:13.550954 kubelet[3069]: E0421 04:14:13.534164 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 04:14:13.987138 sshd[3904]: Accepted publickey for core from 10.0.0.1 port 55548 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:14:14.018195 sshd-session[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:14:14.046245 systemd[1]: cri-containerd-8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491.scope: Deactivated successfully. Apr 21 04:14:14.086506 containerd[1580]: time="2026-04-21T04:14:14.085993322Z" level=info msg="received container exit event container_id:\"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\" id:\"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\" pid:3897 exited_at:{seconds:1776744854 nanos:78058288}" Apr 21 04:14:14.105213 systemd-logind[1564]: New session 20 of user core. Apr 21 04:14:14.149023 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 21 04:14:14.348342 containerd[1580]: time="2026-04-21T04:14:14.346160477Z" level=info msg="StartContainer for \"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\" returns successfully" Apr 21 04:14:15.004935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491-rootfs.mount: Deactivated successfully. 
Apr 21 04:14:15.614834 kubelet[3069]: E0421 04:14:15.614048 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:14:15.902240 sshd[3922]: Connection closed by 10.0.0.1 port 55548 Apr 21 04:14:15.944393 containerd[1580]: time="2026-04-21T04:14:15.911225613Z" level=info msg="CreateContainer within sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 21 04:14:15.920308 sshd-session[3904]: pam_unix(sshd:session): session closed for user core Apr 21 04:14:16.169474 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:55548.service: Deactivated successfully. Apr 21 04:14:16.203897 containerd[1580]: time="2026-04-21T04:14:16.197165657Z" level=info msg="Container dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b: CDI devices from CRI Config.CDIDevices: []" Apr 21 04:14:16.219957 systemd[1]: session-20.scope: Deactivated successfully. Apr 21 04:14:16.261347 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit. Apr 21 04:14:16.290098 systemd-logind[1564]: Removed session 20. 
Apr 21 04:14:16.456075 containerd[1580]: time="2026-04-21T04:14:16.441796624Z" level=info msg="CreateContainer within sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\"" Apr 21 04:14:16.559597 containerd[1580]: time="2026-04-21T04:14:16.557298253Z" level=info msg="StartContainer for \"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\"" Apr 21 04:14:16.756943 containerd[1580]: time="2026-04-21T04:14:16.672434444Z" level=info msg="connecting to shim dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b" address="unix:///run/containerd/s/6441ba6e02fe369efa68012eb39d2cfca5866b03fe7283870461e6419d2ab180" protocol=ttrpc version=3 Apr 21 04:14:17.292056 systemd[1]: Started cri-containerd-dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b.scope - libcontainer container dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b. Apr 21 04:14:18.356777 containerd[1580]: time="2026-04-21T04:14:18.356173613Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:14:18.383984 containerd[1580]: time="2026-04-21T04:14:18.381966412Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 21 04:14:18.608050 systemd[1]: cri-containerd-dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b.scope: Deactivated successfully. 
Apr 21 04:14:18.738235 containerd[1580]: time="2026-04-21T04:14:18.725287413Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 04:14:18.741272 containerd[1580]: time="2026-04-21T04:14:18.739637496Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97fa60f8_356e_4d9e_8041_db7e5215b397.slice/cri-containerd-dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b.scope/memory.events\": no such file or directory" Apr 21 04:14:18.777774 containerd[1580]: time="2026-04-21T04:14:18.776275436Z" level=info msg="received container exit event container_id:\"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\" id:\"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\" pid:3953 exited_at:{seconds:1776744858 nanos:734490961}" Apr 21 04:14:18.779247 kubelet[3069]: E0421 04:14:18.742957 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
Apr 21 04:14:18.836076 containerd[1580]: time="2026-04-21T04:14:18.782665482Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 22.996017433s" Apr 21 04:14:18.836076 containerd[1580]: time="2026-04-21T04:14:18.782971654Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 21 04:14:18.836076 containerd[1580]: time="2026-04-21T04:14:18.827988306Z" level=info msg="StartContainer for \"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\" returns successfully" Apr 21 04:14:18.902338 containerd[1580]: time="2026-04-21T04:14:18.899534250Z" level=info msg="CreateContainer within sandbox \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 21 04:14:19.247318 containerd[1580]: time="2026-04-21T04:14:19.182547629Z" level=info msg="Container 281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3: CDI devices from CRI Config.CDIDevices: []" Apr 21 04:14:19.555339 containerd[1580]: time="2026-04-21T04:14:19.554795503Z" level=info msg="CreateContainer within sandbox \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\"" Apr 21 04:14:19.582244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b-rootfs.mount: Deactivated successfully. 
Apr 21 04:14:19.612320 containerd[1580]: time="2026-04-21T04:14:19.607128418Z" level=info msg="StartContainer for \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\"" Apr 21 04:14:19.737774 containerd[1580]: time="2026-04-21T04:14:19.736939088Z" level=info msg="connecting to shim 281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3" address="unix:///run/containerd/s/f6cf251f02f334059a4d95c00fdebda70f6d85441b0199ace8dc6223a9d4d53a" protocol=ttrpc version=3 Apr 21 04:14:20.066118 kubelet[3069]: E0421 04:14:20.065361 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:14:20.485932 containerd[1580]: time="2026-04-21T04:14:20.483302309Z" level=info msg="CreateContainer within sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 21 04:14:20.503761 systemd[1]: Started cri-containerd-281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3.scope - libcontainer container 281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3. Apr 21 04:14:21.216798 containerd[1580]: time="2026-04-21T04:14:21.215117614Z" level=info msg="Container 8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df: CDI devices from CRI Config.CDIDevices: []" Apr 21 04:14:21.664494 containerd[1580]: time="2026-04-21T04:14:21.651227791Z" level=info msg="CreateContainer within sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\"" Apr 21 04:14:21.772484 systemd[1]: Started sshd@20-10.0.0.144:22-10.0.0.1:38654.service - OpenSSH per-connection server daemon (10.0.0.1:38654). 
Apr 21 04:14:21.844167 containerd[1580]: time="2026-04-21T04:14:21.810159415Z" level=info msg="StartContainer for \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\"" Apr 21 04:14:21.885254 containerd[1580]: time="2026-04-21T04:14:21.884971439Z" level=info msg="connecting to shim 8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df" address="unix:///run/containerd/s/6441ba6e02fe369efa68012eb39d2cfca5866b03fe7283870461e6419d2ab180" protocol=ttrpc version=3 Apr 21 04:14:22.600151 containerd[1580]: time="2026-04-21T04:14:22.546262186Z" level=error msg="get state for 281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3" error="context deadline exceeded" Apr 21 04:14:22.600151 containerd[1580]: time="2026-04-21T04:14:22.547287507Z" level=warning msg="unknown status" status=0 Apr 21 04:14:22.794562 containerd[1580]: time="2026-04-21T04:14:22.792168700Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 21 04:14:23.030300 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 38654 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:14:23.161326 systemd[1]: Started cri-containerd-8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df.scope - libcontainer container 8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df. Apr 21 04:14:23.167903 sshd-session[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:14:23.597334 systemd-logind[1564]: New session 21 of user core. Apr 21 04:14:23.655445 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 21 04:14:23.909250 kubelet[3069]: E0421 04:14:23.891968 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 04:14:24.228263 containerd[1580]: time="2026-04-21T04:14:24.226888462Z" level=info msg="StartContainer for \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\" returns successfully" Apr 21 04:14:25.384920 containerd[1580]: time="2026-04-21T04:14:25.293371948Z" level=error msg="get state for 8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df" error="context deadline exceeded" Apr 21 04:14:25.384920 containerd[1580]: time="2026-04-21T04:14:25.377174439Z" level=warning msg="unknown status" status=0 Apr 21 04:14:26.239627 kubelet[3069]: E0421 04:14:26.144737 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.662s" Apr 21 04:14:28.254756 kubelet[3069]: E0421 04:14:28.253531 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.797s" Apr 21 04:14:28.316620 kubelet[3069]: E0421 04:14:28.316278 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:14:29.250923 containerd[1580]: time="2026-04-21T04:14:29.229934493Z" level=error msg="get state for 8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df" error="context deadline exceeded" Apr 21 04:14:29.356300 containerd[1580]: time="2026-04-21T04:14:29.354403101Z" level=warning msg="unknown status" status=0 Apr 21 04:14:31.658671 kubelet[3069]: I0421 04:14:31.657619 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pdlcd" podStartSLOduration=32.897030331 podStartE2EDuration="3m18.657347412s" podCreationTimestamp="2026-04-21 04:11:13 +0000 UTC" firstStartedPulling="2026-04-21 04:11:33.044327282 +0000 UTC m=+70.469037780" lastFinishedPulling="2026-04-21 04:14:18.804644355 +0000 UTC m=+236.229354861" observedRunningTime="2026-04-21 04:14:31.651605852 +0000 UTC m=+249.076316369" watchObservedRunningTime="2026-04-21 04:14:31.657347412 +0000 UTC m=+249.082057920" 
Apr 21 04:14:32.135391 kubelet[3069]: E0421 04:14:31.659636 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 04:14:32.524522 containerd[1580]: time="2026-04-21T04:14:32.522540435Z" level=error msg="get state for 8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df" error="context deadline exceeded" Apr 21 04:14:32.524522 containerd[1580]: time="2026-04-21T04:14:32.524304202Z" level=warning msg="unknown status" status=0 Apr 21 04:14:33.295863 kubelet[3069]: E0421 04:14:33.293834 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:14:33.642094 containerd[1580]: time="2026-04-21T04:14:33.578485885Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 21 04:14:33.642094 containerd[1580]: time="2026-04-21T04:14:33.580371364Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 21 04:14:33.642094 containerd[1580]: time="2026-04-21T04:14:33.580653065Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 21 04:14:34.149244 sshd[4036]: Connection closed by 10.0.0.1 port 38654 Apr 21 04:14:34.433948 kubelet[3069]: E0421 04:14:34.201013 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.676s" Apr 21 04:14:34.259108 sshd-session[4002]: pam_unix(sshd:session): session closed for user core 
Apr 21 04:14:34.776511 systemd[1]: sshd@20-10.0.0.144:22-10.0.0.1:38654.service: Deactivated successfully. Apr 21 04:14:34.778156 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit. Apr 21 04:14:35.209932 systemd[1]: session-21.scope: Deactivated successfully. Apr 21 04:14:35.416551 systemd[1]: session-21.scope: Consumed 3.431s CPU time, 18.2M memory peak. Apr 21 04:14:35.771795 systemd-logind[1564]: Removed session 21. Apr 21 04:14:37.713624 kubelet[3069]: E0421 04:14:37.709453 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 04:14:38.734460 kubelet[3069]: E0421 04:14:38.732169 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.522s" Apr 21 04:14:40.000794 systemd[1]: Started sshd@21-10.0.0.144:22-10.0.0.1:58018.service - OpenSSH per-connection server daemon (10.0.0.1:58018). 
Apr 21 04:14:40.456470 containerd[1580]: time="2026-04-21T04:14:40.347068043Z" level=info msg="StartContainer for \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\" returns successfully" Apr 21 04:14:41.503204 kubelet[3069]: E0421 04:14:41.495543 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:14:41.934118 kubelet[3069]: E0421 04:14:41.884424 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:14:43.131909 kubelet[3069]: E0421 04:14:43.130585 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.328s" Apr 21 04:14:43.489111 kubelet[3069]: E0421 04:14:43.487228 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 04:14:44.194949 kubelet[3069]: E0421 04:14:44.193829 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.044s" Apr 21 04:14:46.901652 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 58018 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:14:46.990517 containerd[1580]: time="2026-04-21T04:14:46.982662693Z" level=warning msg="container event discarded" container=5c04252940be9819a78e7377bf86d8abd0eb4356121027e17c5344a8c90e6e31 type=CONTAINER_STOPPED_EVENT Apr 21 04:14:47.444924 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:14:48.982271 containerd[1580]: time="2026-04-21T04:14:48.954273492Z" level=warning msg="container event discarded" container=51b22528298019ccba19cd55a8b67111e3ab1b1b15f5732c6d2ecec63573f260 type=CONTAINER_CREATED_EVENT 
Apr 21 04:14:49.166400 systemd-logind[1564]: New session 22 of user core. Apr 21 04:14:49.268227 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 21 04:14:50.453468 kubelet[3069]: E0421 04:14:50.440422 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 04:14:51.227460 kubelet[3069]: E0421 04:14:50.949530 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.487s" Apr 21 04:14:51.227460 kubelet[3069]: E0421 04:14:51.220061 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:14:52.286495 kubelet[3069]: E0421 04:14:52.246219 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.292s" Apr 21 04:14:55.914418 kubelet[3069]: E0421 04:14:55.913198 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 04:14:56.275585 sshd[4091]: Connection closed by 10.0.0.1 port 58018 Apr 21 04:14:56.305439 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Apr 21 04:14:56.715642 kubelet[3069]: E0421 04:14:56.408554 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.919s" Apr 21 04:14:57.049998 systemd[1]: sshd@21-10.0.0.144:22-10.0.0.1:58018.service: Deactivated successfully. Apr 21 04:14:57.164814 systemd[1]: sshd@21-10.0.0.144:22-10.0.0.1:58018.service: Consumed 1.433s CPU time, 4M memory peak. Apr 21 04:14:57.336732 systemd[1]: session-22.scope: Deactivated successfully. 
Apr 21 04:14:57.369670 systemd[1]: session-22.scope: Consumed 2.505s CPU time, 17.8M memory peak. Apr 21 04:14:57.624020 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit. Apr 21 04:14:57.822248 kubelet[3069]: E0421 04:14:57.820435 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.331s" Apr 21 04:14:58.066258 systemd-logind[1564]: Removed session 22. Apr 21 04:14:58.960544 containerd[1580]: time="2026-04-21T04:14:58.955588624Z" level=warning msg="container event discarded" container=51b22528298019ccba19cd55a8b67111e3ab1b1b15f5732c6d2ecec63573f260 type=CONTAINER_STARTED_EVENT Apr 21 04:14:59.833425 kubelet[3069]: E0421 04:14:59.830582 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.141s" Apr 21 04:15:01.111541 kubelet[3069]: E0421 04:15:01.109617 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 04:15:01.847409 systemd[1]: Started sshd@22-10.0.0.144:22-10.0.0.1:43764.service - OpenSSH per-connection server daemon (10.0.0.1:43764). 
Apr 21 04:15:06.206601 kubelet[3069]: E0421 04:15:06.179500 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.61s" Apr 21 04:15:07.763116 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 43764 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:15:07.906981 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:15:08.587219 kubelet[3069]: E0421 04:15:08.546189 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:15:09.226136 systemd-logind[1564]: New session 23 of user core. Apr 21 04:15:09.239835 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 21 04:15:09.361148 kubelet[3069]: E0421 04:15:09.092561 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.622s" Apr 21 04:15:10.724642 kubelet[3069]: E0421 04:15:10.723652 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.362s" Apr 21 04:15:10.854364 kubelet[3069]: E0421 04:15:10.843301 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:15:12.921541 kubelet[3069]: I0421 04:15:12.897412 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jsw7z" podStartSLOduration=97.976813648 podStartE2EDuration="4m4.897281614s" podCreationTimestamp="2026-04-21 04:11:08 +0000 UTC" firstStartedPulling="2026-04-21 04:11:28.761547722 +0000 UTC m=+66.186258218" lastFinishedPulling="2026-04-21 04:13:55.68201568 +0000 UTC m=+213.106726184" observedRunningTime="2026-04-21 04:15:12.869643116 +0000 UTC m=+290.294353652" watchObservedRunningTime="2026-04-21 04:15:12.897281614 +0000 UTC m=+290.321992107" 
Apr 21 04:15:14.250390 sshd[4152]: Connection closed by 10.0.0.1 port 43764 Apr 21 04:15:14.255325 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Apr 21 04:15:14.279198 systemd[1]: sshd@22-10.0.0.144:22-10.0.0.1:43764.service: Deactivated successfully. Apr 21 04:15:14.282373 systemd[1]: sshd@22-10.0.0.144:22-10.0.0.1:43764.service: Consumed 1.433s CPU time, 3.8M memory peak. Apr 21 04:15:14.299371 systemd[1]: session-23.scope: Deactivated successfully. Apr 21 04:15:14.304300 systemd[1]: session-23.scope: Consumed 1.656s CPU time, 17.5M memory peak. Apr 21 04:15:14.320783 systemd-logind[1564]: Session 23 logged out. Waiting for processes to exit. Apr 21 04:15:14.348223 systemd[1]: Started sshd@23-10.0.0.144:22-10.0.0.1:57088.service - OpenSSH per-connection server daemon (10.0.0.1:57088). Apr 21 04:15:14.350579 systemd-logind[1564]: Removed session 23. Apr 21 04:15:14.935040 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 57088 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:15:14.949133 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:15:15.016905 systemd-logind[1564]: New session 24 of user core. Apr 21 04:15:15.032878 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 21 04:15:15.772649 sshd[4184]: Connection closed by 10.0.0.1 port 57088 Apr 21 04:15:15.778828 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Apr 21 04:15:15.848259 systemd[1]: Started sshd@24-10.0.0.144:22-10.0.0.1:56888.service - OpenSSH per-connection server daemon (10.0.0.1:56888). Apr 21 04:15:15.852455 systemd[1]: sshd@23-10.0.0.144:22-10.0.0.1:57088.service: Deactivated successfully. Apr 21 04:15:15.875793 systemd[1]: session-24.scope: Deactivated successfully. 
Apr 21 04:15:15.878473 kubelet[3069]: E0421 04:15:15.877977 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:15:15.888318 systemd-logind[1564]: Session 24 logged out. Waiting for processes to exit. Apr 21 04:15:15.912542 systemd-logind[1564]: Removed session 24. Apr 21 04:15:16.156235 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 56888 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:15:16.179563 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:15:16.226058 systemd-logind[1564]: New session 25 of user core. Apr 21 04:15:16.231820 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 21 04:15:16.633625 sshd[4224]: Connection closed by 10.0.0.1 port 56888 Apr 21 04:15:16.638057 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Apr 21 04:15:16.673550 systemd[1]: sshd@24-10.0.0.144:22-10.0.0.1:56888.service: Deactivated successfully. Apr 21 04:15:16.680655 systemd[1]: session-25.scope: Deactivated successfully. Apr 21 04:15:16.682382 systemd-logind[1564]: Session 25 logged out. Waiting for processes to exit. Apr 21 04:15:16.689007 systemd-logind[1564]: Removed session 25. 
Apr 21 04:15:18.386883 systemd-networkd[1483]: cilium_host: Link UP Apr 21 04:15:18.396145 systemd-networkd[1483]: cilium_net: Link UP Apr 21 04:15:18.396748 systemd-networkd[1483]: cilium_net: Gained carrier Apr 21 04:15:18.396967 systemd-networkd[1483]: cilium_host: Gained carrier Apr 21 04:15:18.588417 systemd-networkd[1483]: cilium_net: Gained IPv6LL Apr 21 04:15:19.433109 systemd-networkd[1483]: cilium_host: Gained IPv6LL Apr 21 04:15:19.436169 systemd-networkd[1483]: cilium_vxlan: Link UP Apr 21 04:15:19.436177 systemd-networkd[1483]: cilium_vxlan: Gained carrier Apr 21 04:15:20.656990 systemd-networkd[1483]: cilium_vxlan: Gained IPv6LL Apr 21 04:15:20.872191 kernel: NET: Registered PF_ALG protocol family Apr 21 04:15:21.788031 systemd[1]: Started sshd@25-10.0.0.144:22-10.0.0.1:56896.service - OpenSSH per-connection server daemon (10.0.0.1:56896). Apr 21 04:15:22.110025 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 56896 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:15:22.114048 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:15:22.142968 systemd-logind[1564]: New session 26 of user core. Apr 21 04:15:22.154077 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 21 04:15:22.709104 sshd[4367]: Connection closed by 10.0.0.1 port 56896 Apr 21 04:15:22.713337 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Apr 21 04:15:22.730151 systemd[1]: sshd@25-10.0.0.144:22-10.0.0.1:56896.service: Deactivated successfully. Apr 21 04:15:22.765150 systemd[1]: session-26.scope: Deactivated successfully. Apr 21 04:15:22.778516 systemd-logind[1564]: Session 26 logged out. Waiting for processes to exit. Apr 21 04:15:22.800338 systemd-logind[1564]: Removed session 26. 
Apr 21 04:15:25.726995 systemd-networkd[1483]: lxc_health: Link UP Apr 21 04:15:25.735966 systemd-networkd[1483]: lxc_health: Gained carrier Apr 21 04:15:25.977326 kubelet[3069]: E0421 04:15:25.951587 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:15:26.168331 kubelet[3069]: E0421 04:15:26.166367 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:15:27.562030 systemd-networkd[1483]: lxc_health: Gained IPv6LL Apr 21 04:15:27.762145 systemd[1]: Started sshd@26-10.0.0.144:22-10.0.0.1:48130.service - OpenSSH per-connection server daemon (10.0.0.1:48130). Apr 21 04:15:28.130829 sshd[4624]: Accepted publickey for core from 10.0.0.1 port 48130 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:15:28.141970 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:15:28.188421 systemd-logind[1564]: New session 27 of user core. Apr 21 04:15:28.201308 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 21 04:15:28.481747 kubelet[3069]: E0421 04:15:28.476687 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:15:28.739493 sshd[4630]: Connection closed by 10.0.0.1 port 48130 Apr 21 04:15:28.743669 sshd-session[4624]: pam_unix(sshd:session): session closed for user core Apr 21 04:15:28.808410 systemd[1]: sshd@26-10.0.0.144:22-10.0.0.1:48130.service: Deactivated successfully. Apr 21 04:15:28.888433 systemd[1]: session-27.scope: Deactivated successfully. Apr 21 04:15:28.965352 systemd-logind[1564]: Session 27 logged out. Waiting for processes to exit. 
Apr 21 04:15:28.996177 systemd-logind[1564]: Removed session 27. Apr 21 04:15:33.878145 systemd[1]: Started sshd@27-10.0.0.144:22-10.0.0.1:48134.service - OpenSSH per-connection server daemon (10.0.0.1:48134). Apr 21 04:15:34.315850 sshd[4653]: Accepted publickey for core from 10.0.0.1 port 48134 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:15:34.323297 sshd-session[4653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:15:34.356947 systemd-logind[1564]: New session 28 of user core. Apr 21 04:15:34.370008 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 21 04:15:35.186777 sshd[4656]: Connection closed by 10.0.0.1 port 48134 Apr 21 04:15:35.186160 sshd-session[4653]: pam_unix(sshd:session): session closed for user core Apr 21 04:15:35.275255 systemd[1]: sshd@27-10.0.0.144:22-10.0.0.1:48134.service: Deactivated successfully. Apr 21 04:15:35.296245 systemd[1]: session-28.scope: Deactivated successfully. Apr 21 04:15:35.300006 systemd-logind[1564]: Session 28 logged out. Waiting for processes to exit. Apr 21 04:15:35.303616 systemd-logind[1564]: Removed session 28. Apr 21 04:15:38.466231 kubelet[3069]: E0421 04:15:38.462013 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:15:40.245036 systemd[1]: Started sshd@28-10.0.0.144:22-10.0.0.1:57788.service - OpenSSH per-connection server daemon (10.0.0.1:57788). Apr 21 04:15:40.800052 sshd[4674]: Accepted publickey for core from 10.0.0.1 port 57788 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:15:40.807266 sshd-session[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:15:40.842179 systemd-logind[1564]: New session 29 of user core. Apr 21 04:15:40.884870 systemd[1]: Started session-29.scope - Session 29 of User core. 
Apr 21 04:15:41.333759 sshd[4677]: Connection closed by 10.0.0.1 port 57788
Apr 21 04:15:41.345992 sshd-session[4674]: pam_unix(sshd:session): session closed for user core
Apr 21 04:15:41.406086 systemd[1]: Started sshd@29-10.0.0.144:22-10.0.0.1:57794.service - OpenSSH per-connection server daemon (10.0.0.1:57794).
Apr 21 04:15:41.431555 systemd[1]: sshd@28-10.0.0.144:22-10.0.0.1:57788.service: Deactivated successfully.
Apr 21 04:15:41.462168 systemd[1]: session-29.scope: Deactivated successfully.
Apr 21 04:15:41.632507 systemd-logind[1564]: Session 29 logged out. Waiting for processes to exit.
Apr 21 04:15:41.637869 systemd-logind[1564]: Removed session 29.
Apr 21 04:15:42.052204 sshd[4687]: Accepted publickey for core from 10.0.0.1 port 57794 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:15:42.075749 sshd-session[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:15:42.179530 systemd-logind[1564]: New session 30 of user core.
Apr 21 04:15:42.215445 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 21 04:15:44.074252 sshd[4699]: Connection closed by 10.0.0.1 port 57794
Apr 21 04:15:44.091361 sshd-session[4687]: pam_unix(sshd:session): session closed for user core
Apr 21 04:15:44.227738 systemd[1]: sshd@29-10.0.0.144:22-10.0.0.1:57794.service: Deactivated successfully.
Apr 21 04:15:44.315877 systemd[1]: session-30.scope: Deactivated successfully.
Apr 21 04:15:44.393848 systemd-logind[1564]: Session 30 logged out. Waiting for processes to exit.
Apr 21 04:15:44.419003 systemd[1]: Started sshd@30-10.0.0.144:22-10.0.0.1:57810.service - OpenSSH per-connection server daemon (10.0.0.1:57810).
Apr 21 04:15:44.432952 systemd-logind[1564]: Removed session 30.
Apr 21 04:15:44.742162 sshd[4714]: Accepted publickey for core from 10.0.0.1 port 57810 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:15:44.743492 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:15:44.798901 systemd-logind[1564]: New session 31 of user core.
Apr 21 04:15:44.880490 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 21 04:15:48.532387 kubelet[3069]: E0421 04:15:48.529806 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:15:50.409534 sshd[4717]: Connection closed by 10.0.0.1 port 57810
Apr 21 04:15:50.437954 sshd-session[4714]: pam_unix(sshd:session): session closed for user core
Apr 21 04:15:50.724449 systemd[1]: sshd@30-10.0.0.144:22-10.0.0.1:57810.service: Deactivated successfully.
Apr 21 04:15:50.869428 systemd[1]: session-31.scope: Deactivated successfully.
Apr 21 04:15:50.884594 systemd[1]: session-31.scope: Consumed 2.555s CPU time, 46.8M memory peak.
Apr 21 04:15:51.025364 systemd-logind[1564]: Session 31 logged out. Waiting for processes to exit.
Apr 21 04:15:51.116295 systemd[1]: Started sshd@31-10.0.0.144:22-10.0.0.1:58016.service - OpenSSH per-connection server daemon (10.0.0.1:58016).
Apr 21 04:15:51.253450 systemd-logind[1564]: Removed session 31.
Apr 21 04:15:51.661844 kubelet[3069]: E0421 04:15:51.661219 3069 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.222s"
Apr 21 04:15:51.938121 sshd[4739]: Accepted publickey for core from 10.0.0.1 port 58016 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:15:51.965766 sshd-session[4739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:15:52.156294 systemd-logind[1564]: New session 32 of user core.
Apr 21 04:15:52.164266 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 21 04:15:54.661844 sshd[4743]: Connection closed by 10.0.0.1 port 58016
Apr 21 04:15:54.665768 sshd-session[4739]: pam_unix(sshd:session): session closed for user core
Apr 21 04:15:54.770643 systemd[1]: sshd@31-10.0.0.144:22-10.0.0.1:58016.service: Deactivated successfully.
Apr 21 04:15:54.890293 systemd[1]: session-32.scope: Deactivated successfully.
Apr 21 04:15:54.930935 systemd[1]: session-32.scope: Consumed 1.568s CPU time, 30.1M memory peak.
Apr 21 04:15:55.123043 systemd-logind[1564]: Session 32 logged out. Waiting for processes to exit.
Apr 21 04:15:55.165632 systemd[1]: Started sshd@32-10.0.0.144:22-10.0.0.1:58020.service - OpenSSH per-connection server daemon (10.0.0.1:58020).
Apr 21 04:15:55.260932 systemd-logind[1564]: Removed session 32.
Apr 21 04:15:56.031162 sshd[4756]: Accepted publickey for core from 10.0.0.1 port 58020 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:15:56.117489 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:15:56.306458 systemd-logind[1564]: New session 33 of user core.
Apr 21 04:15:56.339381 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 21 04:15:57.367849 sshd[4759]: Connection closed by 10.0.0.1 port 58020
Apr 21 04:15:57.387057 sshd-session[4756]: pam_unix(sshd:session): session closed for user core
Apr 21 04:15:57.531167 systemd[1]: sshd@32-10.0.0.144:22-10.0.0.1:58020.service: Deactivated successfully.
Apr 21 04:15:57.599308 systemd[1]: session-33.scope: Deactivated successfully.
Apr 21 04:15:57.644848 systemd-logind[1564]: Session 33 logged out. Waiting for processes to exit.
Apr 21 04:15:57.670587 systemd-logind[1564]: Removed session 33.
Apr 21 04:16:02.534399 systemd[1]: Started sshd@33-10.0.0.144:22-10.0.0.1:35626.service - OpenSSH per-connection server daemon (10.0.0.1:35626).
Apr 21 04:16:03.295402 sshd[4778]: Accepted publickey for core from 10.0.0.1 port 35626 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:16:03.307608 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:16:03.351856 systemd-logind[1564]: New session 34 of user core.
Apr 21 04:16:03.363614 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 21 04:16:03.699157 sshd[4781]: Connection closed by 10.0.0.1 port 35626
Apr 21 04:16:03.704565 sshd-session[4778]: pam_unix(sshd:session): session closed for user core
Apr 21 04:16:03.745560 systemd-logind[1564]: Session 34 logged out. Waiting for processes to exit.
Apr 21 04:16:03.747755 systemd[1]: sshd@33-10.0.0.144:22-10.0.0.1:35626.service: Deactivated successfully.
Apr 21 04:16:03.856807 systemd[1]: session-34.scope: Deactivated successfully.
Apr 21 04:16:03.894877 systemd-logind[1564]: Removed session 34.
Apr 21 04:16:08.757432 systemd[1]: Started sshd@34-10.0.0.144:22-10.0.0.1:36292.service - OpenSSH per-connection server daemon (10.0.0.1:36292).
Apr 21 04:16:08.892610 sshd[4797]: Accepted publickey for core from 10.0.0.1 port 36292 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:16:08.896129 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:16:09.030343 systemd-logind[1564]: New session 35 of user core.
Apr 21 04:16:09.048263 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 21 04:16:09.363148 sshd[4800]: Connection closed by 10.0.0.1 port 36292
Apr 21 04:16:09.364081 sshd-session[4797]: pam_unix(sshd:session): session closed for user core
Apr 21 04:16:09.381239 systemd[1]: sshd@34-10.0.0.144:22-10.0.0.1:36292.service: Deactivated successfully.
Apr 21 04:16:09.386368 systemd[1]: session-35.scope: Deactivated successfully.
Apr 21 04:16:09.387523 systemd-logind[1564]: Session 35 logged out. Waiting for processes to exit.
Apr 21 04:16:09.389351 systemd-logind[1564]: Removed session 35.
Apr 21 04:16:09.450030 kubelet[3069]: E0421 04:16:09.449782 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:13.470821 kubelet[3069]: E0421 04:16:13.470566 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:14.412186 systemd[1]: Started sshd@35-10.0.0.144:22-10.0.0.1:36294.service - OpenSSH per-connection server daemon (10.0.0.1:36294).
Apr 21 04:16:14.573467 sshd[4814]: Accepted publickey for core from 10.0.0.1 port 36294 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:16:14.589221 sshd-session[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:16:14.620666 systemd-logind[1564]: New session 36 of user core.
Apr 21 04:16:14.637312 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 21 04:16:16.280218 sshd[4817]: Connection closed by 10.0.0.1 port 36294
Apr 21 04:16:16.347158 sshd-session[4814]: pam_unix(sshd:session): session closed for user core
Apr 21 04:16:16.537249 systemd[1]: sshd@35-10.0.0.144:22-10.0.0.1:36294.service: Deactivated successfully.
Apr 21 04:16:16.721399 systemd[1]: session-36.scope: Deactivated successfully.
Apr 21 04:16:16.748440 systemd[1]: session-36.scope: Consumed 1.080s CPU time, 15.9M memory peak.
Apr 21 04:16:16.873166 systemd-logind[1564]: Session 36 logged out. Waiting for processes to exit.
Apr 21 04:16:16.941335 systemd[1]: Started sshd@36-10.0.0.144:22-10.0.0.1:47784.service - OpenSSH per-connection server daemon (10.0.0.1:47784).
Apr 21 04:16:17.090309 systemd-logind[1564]: Removed session 36.
Apr 21 04:16:18.163017 sshd[4831]: Accepted publickey for core from 10.0.0.1 port 47784 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 04:16:18.190279 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 04:16:18.385466 systemd-logind[1564]: New session 37 of user core.
Apr 21 04:16:18.432536 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 21 04:16:21.053145 containerd[1580]: time="2026-04-21T04:16:21.052805319Z" level=info msg="StopContainer for \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\" with timeout 30 (s)"
Apr 21 04:16:21.077449 containerd[1580]: time="2026-04-21T04:16:21.077066277Z" level=info msg="Stop container \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\" with signal terminated"
Apr 21 04:16:21.104901 containerd[1580]: time="2026-04-21T04:16:21.103305237Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 04:16:21.122946 containerd[1580]: time="2026-04-21T04:16:21.122897654Z" level=info msg="StopContainer for \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\" with timeout 2 (s)"
Apr 21 04:16:21.123723 containerd[1580]: time="2026-04-21T04:16:21.123658444Z" level=info msg="Stop container \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\" with signal terminated"
Apr 21 04:16:21.127750 systemd[1]: cri-containerd-281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3.scope: Deactivated successfully.
Apr 21 04:16:21.128641 systemd[1]: cri-containerd-281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3.scope: Consumed 6.824s CPU time, 30.1M memory peak, 4K written to disk.
Apr 21 04:16:21.136744 containerd[1580]: time="2026-04-21T04:16:21.136096400Z" level=info msg="received container exit event container_id:\"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\" id:\"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\" pid:3995 exited_at:{seconds:1776744981 nanos:134677027}"
Apr 21 04:16:21.149320 systemd-networkd[1483]: lxc_health: Link DOWN
Apr 21 04:16:21.149338 systemd-networkd[1483]: lxc_health: Lost carrier
Apr 21 04:16:21.172066 systemd[1]: cri-containerd-8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df.scope: Deactivated successfully.
Apr 21 04:16:21.172462 systemd[1]: cri-containerd-8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df.scope: Consumed 39.290s CPU time, 129.6M memory peak, 7.4M read from disk, 13.3M written to disk.
Apr 21 04:16:21.181449 containerd[1580]: time="2026-04-21T04:16:21.181351625Z" level=info msg="received container exit event container_id:\"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\" id:\"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\" pid:4024 exited_at:{seconds:1776744981 nanos:180932056}"
Apr 21 04:16:21.184115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3-rootfs.mount: Deactivated successfully.
Apr 21 04:16:21.217558 containerd[1580]: time="2026-04-21T04:16:21.217435749Z" level=info msg="StopContainer for \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\" returns successfully"
Apr 21 04:16:21.225153 containerd[1580]: time="2026-04-21T04:16:21.225101321Z" level=info msg="StopPodSandbox for \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\""
Apr 21 04:16:21.249105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df-rootfs.mount: Deactivated successfully.
Apr 21 04:16:21.268079 containerd[1580]: time="2026-04-21T04:16:21.267872253Z" level=info msg="StopContainer for \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\" returns successfully"
Apr 21 04:16:21.271532 containerd[1580]: time="2026-04-21T04:16:21.271469318Z" level=info msg="StopPodSandbox for \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\""
Apr 21 04:16:21.275941 containerd[1580]: time="2026-04-21T04:16:21.275833381Z" level=info msg="Container to stop \"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 04:16:21.275941 containerd[1580]: time="2026-04-21T04:16:21.275906080Z" level=info msg="Container to stop \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 04:16:21.275941 containerd[1580]: time="2026-04-21T04:16:21.275920152Z" level=info msg="Container to stop \"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 04:16:21.275941 containerd[1580]: time="2026-04-21T04:16:21.275931798Z" level=info msg="Container to stop \"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 04:16:21.275941 containerd[1580]: time="2026-04-21T04:16:21.275943449Z" level=info msg="Container to stop \"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 04:16:21.277175 containerd[1580]: time="2026-04-21T04:16:21.277114993Z" level=info msg="Container to stop \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 04:16:21.316953 systemd[1]: cri-containerd-0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe.scope: Deactivated successfully.
Apr 21 04:16:21.323067 systemd[1]: cri-containerd-6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11.scope: Deactivated successfully.
Apr 21 04:16:21.326601 containerd[1580]: time="2026-04-21T04:16:21.326423853Z" level=info msg="received sandbox exit event container_id:\"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\" id:\"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\" exit_status:137 exited_at:{seconds:1776744981 nanos:323014009}" monitor_name=podsandbox
Apr 21 04:16:21.358475 containerd[1580]: time="2026-04-21T04:16:21.357454962Z" level=info msg="received sandbox exit event container_id:\"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" id:\"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" exit_status:137 exited_at:{seconds:1776744981 nanos:319980165}" monitor_name=podsandbox
Apr 21 04:16:21.532739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe-rootfs.mount: Deactivated successfully.
Apr 21 04:16:21.584763 containerd[1580]: time="2026-04-21T04:16:21.583131463Z" level=info msg="shim disconnected" id=0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe namespace=k8s.io
Apr 21 04:16:21.584763 containerd[1580]: time="2026-04-21T04:16:21.583217830Z" level=warning msg="cleaning up after shim disconnected" id=0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe namespace=k8s.io
Apr 21 04:16:21.584763 containerd[1580]: time="2026-04-21T04:16:21.583229354Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 04:16:21.607070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11-rootfs.mount: Deactivated successfully.
Apr 21 04:16:21.658745 containerd[1580]: time="2026-04-21T04:16:21.658225689Z" level=info msg="shim disconnected" id=6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11 namespace=k8s.io
Apr 21 04:16:21.683319 containerd[1580]: time="2026-04-21T04:16:21.670175670Z" level=warning msg="cleaning up after shim disconnected" id=6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11 namespace=k8s.io
Apr 21 04:16:21.685467 containerd[1580]: time="2026-04-21T04:16:21.684968611Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 04:16:21.732750 containerd[1580]: time="2026-04-21T04:16:21.731671101Z" level=info msg="TearDown network for sandbox \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\" successfully"
Apr 21 04:16:21.732750 containerd[1580]: time="2026-04-21T04:16:21.732624668Z" level=info msg="StopPodSandbox for \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\" returns successfully"
Apr 21 04:16:21.733971 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe-shm.mount: Deactivated successfully.
Apr 21 04:16:21.736319 containerd[1580]: time="2026-04-21T04:16:21.736262321Z" level=info msg="TearDown network for sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" successfully"
Apr 21 04:16:21.736319 containerd[1580]: time="2026-04-21T04:16:21.736309979Z" level=info msg="StopPodSandbox for \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" returns successfully"
Apr 21 04:16:21.747989 containerd[1580]: time="2026-04-21T04:16:21.747928016Z" level=info msg="received sandbox container exit event sandbox_id:\"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\" exit_status:137 exited_at:{seconds:1776744981 nanos:323014009}" monitor_name=criService
Apr 21 04:16:21.748172 containerd[1580]: time="2026-04-21T04:16:21.748122775Z" level=info msg="received sandbox container exit event sandbox_id:\"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" exit_status:137 exited_at:{seconds:1776744981 nanos:319980165}" monitor_name=criService
Apr 21 04:16:21.906051 kubelet[3069]: I0421 04:16:21.901408 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-run\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.906051 kubelet[3069]: I0421 04:16:21.904786 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-lib-modules\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.913489 kubelet[3069]: I0421 04:16:21.906901 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cni-path\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.913489 kubelet[3069]: I0421 04:16:21.907430 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52wht\" (UniqueName: \"kubernetes.io/projected/5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77-kube-api-access-52wht\") pod \"5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77\" (UID: \"5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77\") "
Apr 21 04:16:21.913489 kubelet[3069]: I0421 04:16:21.907568 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-etc-cni-netd\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.913489 kubelet[3069]: I0421 04:16:21.907551 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 04:16:21.913489 kubelet[3069]: I0421 04:16:21.907687 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-xtables-lock\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.913489 kubelet[3069]: I0421 04:16:21.911391 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-config-path\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.921059 kubelet[3069]: I0421 04:16:21.911614 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77-cilium-config-path\") pod \"5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77\" (UID: \"5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77\") "
Apr 21 04:16:21.921059 kubelet[3069]: I0421 04:16:21.911382 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 04:16:21.921059 kubelet[3069]: I0421 04:16:21.911731 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs8zj\" (UniqueName: \"kubernetes.io/projected/97fa60f8-356e-4d9e-8041-db7e5215b397-kube-api-access-qs8zj\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.921059 kubelet[3069]: I0421 04:16:21.911760 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97fa60f8-356e-4d9e-8041-db7e5215b397-hubble-tls\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.921059 kubelet[3069]: I0421 04:16:21.911782 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-hostproc\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.921059 kubelet[3069]: I0421 04:16:21.912018 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-bpf-maps\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.921971 kubelet[3069]: I0421 04:16:21.912116 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-host-proc-sys-kernel\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.921971 kubelet[3069]: I0421 04:16:21.912147 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-cgroup\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.921971 kubelet[3069]: I0421 04:16:21.912392 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97fa60f8-356e-4d9e-8041-db7e5215b397-clustermesh-secrets\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.921971 kubelet[3069]: I0421 04:16:21.912431 3069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-host-proc-sys-net\") pod \"97fa60f8-356e-4d9e-8041-db7e5215b397\" (UID: \"97fa60f8-356e-4d9e-8041-db7e5215b397\") "
Apr 21 04:16:21.921971 kubelet[3069]: I0421 04:16:21.912581 3069 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:21.921971 kubelet[3069]: I0421 04:16:21.903644 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 04:16:21.922904 kubelet[3069]: I0421 04:16:21.922575 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 04:16:21.944052 kubelet[3069]: I0421 04:16:21.941116 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cni-path" (OuterVolumeSpecName: "cni-path") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 04:16:21.974272 kubelet[3069]: I0421 04:16:21.972792 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 04:16:21.986756 kubelet[3069]: I0421 04:16:21.984875 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 04:16:21.994593 kubelet[3069]: I0421 04:16:21.991558 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 04:16:21.999833 kubelet[3069]: I0421 04:16:21.999464 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77" (UID: "5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 04:16:21.999833 kubelet[3069]: I0421 04:16:21.994001 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77-kube-api-access-52wht" (OuterVolumeSpecName: "kube-api-access-52wht") pod "5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77" (UID: "5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77"). InnerVolumeSpecName "kube-api-access-52wht". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 04:16:22.000860 kubelet[3069]: I0421 04:16:22.000828 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-hostproc" (OuterVolumeSpecName: "hostproc") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 04:16:22.003845 kubelet[3069]: I0421 04:16:22.002386 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 04:16:22.007658 kubelet[3069]: I0421 04:16:22.004048 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 04:16:22.014344 kubelet[3069]: I0421 04:16:22.013977 3069 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.014344 kubelet[3069]: I0421 04:16:22.014100 3069 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.014344 kubelet[3069]: I0421 04:16:22.014114 3069 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.014344 kubelet[3069]: I0421 04:16:22.014125 3069 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-52wht\" (UniqueName: \"kubernetes.io/projected/5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77-kube-api-access-52wht\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.014344 kubelet[3069]: I0421 04:16:22.014188 3069 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.014344 kubelet[3069]: I0421 04:16:22.014204 3069 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.014344 kubelet[3069]: I0421 04:16:22.014215 3069 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.014344 kubelet[3069]: I0421 04:16:22.014277 3069 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.014928 kubelet[3069]: I0421 04:16:22.014291 3069 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.014928 kubelet[3069]: I0421 04:16:22.014300 3069 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.014928 kubelet[3069]: I0421 04:16:22.014311 3069 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.014928 kubelet[3069]: I0421 04:16:22.014371 3069 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97fa60f8-356e-4d9e-8041-db7e5215b397-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 21 04:16:22.040305 kubelet[3069]: I0421 04:16:22.039924 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97fa60f8-356e-4d9e-8041-db7e5215b397-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 21 04:16:22.041459 kubelet[3069]: I0421 04:16:22.040921 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97fa60f8-356e-4d9e-8041-db7e5215b397-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 04:16:22.049683 kubelet[3069]: I0421 04:16:22.047011 3069 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97fa60f8-356e-4d9e-8041-db7e5215b397-kube-api-access-qs8zj" (OuterVolumeSpecName: "kube-api-access-qs8zj") pod "97fa60f8-356e-4d9e-8041-db7e5215b397" (UID: "97fa60f8-356e-4d9e-8041-db7e5215b397"). InnerVolumeSpecName "kube-api-access-qs8zj".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 04:16:22.117413 kubelet[3069]: I0421 04:16:22.116481 3069 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97fa60f8-356e-4d9e-8041-db7e5215b397-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 21 04:16:22.117413 kubelet[3069]: I0421 04:16:22.117094 3069 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qs8zj\" (UniqueName: \"kubernetes.io/projected/97fa60f8-356e-4d9e-8041-db7e5215b397-kube-api-access-qs8zj\") on node \"localhost\" DevicePath \"\"" Apr 21 04:16:22.117413 kubelet[3069]: I0421 04:16:22.117133 3069 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97fa60f8-356e-4d9e-8041-db7e5215b397-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 21 04:16:22.147191 kubelet[3069]: I0421 04:16:22.140101 3069 scope.go:117] "RemoveContainer" containerID="8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df" Apr 21 04:16:22.179040 systemd[1]: Removed slice kubepods-burstable-pod97fa60f8_356e_4d9e_8041_db7e5215b397.slice - libcontainer container kubepods-burstable-pod97fa60f8_356e_4d9e_8041_db7e5215b397.slice. Apr 21 04:16:22.179205 systemd[1]: kubepods-burstable-pod97fa60f8_356e_4d9e_8041_db7e5215b397.slice: Consumed 40.953s CPU time, 129.9M memory peak, 7.5M read from disk, 13.3M written to disk. Apr 21 04:16:22.184588 containerd[1580]: time="2026-04-21T04:16:22.180656631Z" level=info msg="RemoveContainer for \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\"" Apr 21 04:16:22.185389 systemd[1]: var-lib-kubelet-pods-5d5a89ab\x2da873\x2d4aaf\x2db4d2\x2d1ce3236c8c77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d52wht.mount: Deactivated successfully. 
Apr 21 04:16:22.185537 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11-shm.mount: Deactivated successfully. Apr 21 04:16:22.185631 systemd[1]: var-lib-kubelet-pods-97fa60f8\x2d356e\x2d4d9e\x2d8041\x2ddb7e5215b397-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqs8zj.mount: Deactivated successfully. Apr 21 04:16:22.185746 systemd[1]: var-lib-kubelet-pods-97fa60f8\x2d356e\x2d4d9e\x2d8041\x2ddb7e5215b397-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 21 04:16:22.185848 systemd[1]: var-lib-kubelet-pods-97fa60f8\x2d356e\x2d4d9e\x2d8041\x2ddb7e5215b397-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 21 04:16:22.253581 systemd[1]: Removed slice kubepods-besteffort-pod5d5a89ab_a873_4aaf_b4d2_1ce3236c8c77.slice - libcontainer container kubepods-besteffort-pod5d5a89ab_a873_4aaf_b4d2_1ce3236c8c77.slice. Apr 21 04:16:22.255085 systemd[1]: kubepods-besteffort-pod5d5a89ab_a873_4aaf_b4d2_1ce3236c8c77.slice: Consumed 7.204s CPU time, 30.3M memory peak, 4K written to disk. 
Apr 21 04:16:22.286757 containerd[1580]: time="2026-04-21T04:16:22.285141627Z" level=info msg="RemoveContainer for \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\" returns successfully" Apr 21 04:16:22.296498 kubelet[3069]: I0421 04:16:22.295455 3069 scope.go:117] "RemoveContainer" containerID="dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b" Apr 21 04:16:22.349206 containerd[1580]: time="2026-04-21T04:16:22.349000909Z" level=info msg="RemoveContainer for \"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\"" Apr 21 04:16:22.368970 containerd[1580]: time="2026-04-21T04:16:22.368785770Z" level=info msg="RemoveContainer for \"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\" returns successfully" Apr 21 04:16:22.372523 kubelet[3069]: I0421 04:16:22.372246 3069 scope.go:117] "RemoveContainer" containerID="8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491" Apr 21 04:16:22.412251 containerd[1580]: time="2026-04-21T04:16:22.411892242Z" level=info msg="RemoveContainer for \"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\"" Apr 21 04:16:22.436637 containerd[1580]: time="2026-04-21T04:16:22.434266580Z" level=info msg="RemoveContainer for \"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\" returns successfully" Apr 21 04:16:22.450407 kubelet[3069]: I0421 04:16:22.450008 3069 scope.go:117] "RemoveContainer" containerID="1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4" Apr 21 04:16:22.471775 kubelet[3069]: I0421 04:16:22.471595 3069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97fa60f8-356e-4d9e-8041-db7e5215b397" path="/var/lib/kubelet/pods/97fa60f8-356e-4d9e-8041-db7e5215b397/volumes" Apr 21 04:16:22.474741 containerd[1580]: time="2026-04-21T04:16:22.474651246Z" level=info msg="RemoveContainer for \"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\"" Apr 21 04:16:22.505569 containerd[1580]: 
time="2026-04-21T04:16:22.505001276Z" level=info msg="RemoveContainer for \"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\" returns successfully" Apr 21 04:16:22.510462 kubelet[3069]: I0421 04:16:22.507776 3069 scope.go:117] "RemoveContainer" containerID="0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457" Apr 21 04:16:22.552517 containerd[1580]: time="2026-04-21T04:16:22.550877861Z" level=info msg="RemoveContainer for \"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\"" Apr 21 04:16:22.605769 containerd[1580]: time="2026-04-21T04:16:22.604326840Z" level=info msg="RemoveContainer for \"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\" returns successfully" Apr 21 04:16:22.607660 kubelet[3069]: I0421 04:16:22.607570 3069 scope.go:117] "RemoveContainer" containerID="8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df" Apr 21 04:16:22.616947 containerd[1580]: time="2026-04-21T04:16:22.614664804Z" level=error msg="ContainerStatus for \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\": not found" Apr 21 04:16:22.631735 kubelet[3069]: E0421 04:16:22.631563 3069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\": not found" containerID="8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df" Apr 21 04:16:22.643813 kubelet[3069]: I0421 04:16:22.639303 3069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df"} err="failed to get container status \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\": rpc error: code = NotFound 
desc = an error occurred when try to find container \"8e77ae36e3de06e8fc91c547d2590b3978ee3c7f83910cb65aa2341d0e63a5df\": not found" Apr 21 04:16:22.646631 kubelet[3069]: I0421 04:16:22.644932 3069 scope.go:117] "RemoveContainer" containerID="dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b" Apr 21 04:16:22.651134 containerd[1580]: time="2026-04-21T04:16:22.649774410Z" level=error msg="ContainerStatus for \"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\": not found" Apr 21 04:16:22.651134 containerd[1580]: time="2026-04-21T04:16:22.651073415Z" level=error msg="ContainerStatus for \"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\": not found" Apr 21 04:16:22.651483 kubelet[3069]: E0421 04:16:22.650229 3069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\": not found" containerID="dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b" Apr 21 04:16:22.651483 kubelet[3069]: I0421 04:16:22.650522 3069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b"} err="failed to get container status \"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\": rpc error: code = NotFound desc = an error occurred when try to find container \"dee8528fab5071d730aaad5f837a5a2654a96d1205b673ccddd20293f0869d5b\": not found" Apr 21 04:16:22.651483 kubelet[3069]: I0421 04:16:22.650654 3069 scope.go:117] "RemoveContainer" 
containerID="8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491" Apr 21 04:16:22.651483 kubelet[3069]: E0421 04:16:22.651310 3069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\": not found" containerID="8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491" Apr 21 04:16:22.651483 kubelet[3069]: I0421 04:16:22.651443 3069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491"} err="failed to get container status \"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e0083ec3d5cab8b144efea9955fcc11733e0baba4f830d2859c9d6e391c3491\": not found" Apr 21 04:16:22.651483 kubelet[3069]: I0421 04:16:22.651471 3069 scope.go:117] "RemoveContainer" containerID="1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4" Apr 21 04:16:22.651986 containerd[1580]: time="2026-04-21T04:16:22.651795322Z" level=error msg="ContainerStatus for \"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\": not found" Apr 21 04:16:22.652029 kubelet[3069]: E0421 04:16:22.651994 3069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\": not found" containerID="1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4" Apr 21 04:16:22.652056 kubelet[3069]: I0421 04:16:22.652020 3069 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4"} err="failed to get container status \"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b6f5d6102a6593dbe25c46e3d729b632af17fb4da2f09f65e10aefa005012c4\": not found" Apr 21 04:16:22.652056 kubelet[3069]: I0421 04:16:22.652045 3069 scope.go:117] "RemoveContainer" containerID="0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457" Apr 21 04:16:22.656193 containerd[1580]: time="2026-04-21T04:16:22.654010728Z" level=error msg="ContainerStatus for \"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\": not found" Apr 21 04:16:22.657200 kubelet[3069]: E0421 04:16:22.656918 3069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\": not found" containerID="0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457" Apr 21 04:16:22.657620 kubelet[3069]: I0421 04:16:22.657495 3069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457"} err="failed to get container status \"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\": rpc error: code = NotFound desc = an error occurred when try to find container \"0712ed02175877f17f7cd23a7140babc3daced31ec67f6c02d0381a7a279e457\": not found" Apr 21 04:16:22.657824 kubelet[3069]: I0421 04:16:22.657756 3069 scope.go:117] "RemoveContainer" containerID="281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3" Apr 21 04:16:22.695527 containerd[1580]: 
time="2026-04-21T04:16:22.691929341Z" level=info msg="RemoveContainer for \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\"" Apr 21 04:16:22.730322 containerd[1580]: time="2026-04-21T04:16:22.730093345Z" level=info msg="RemoveContainer for \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\" returns successfully" Apr 21 04:16:22.747719 kubelet[3069]: I0421 04:16:22.747132 3069 scope.go:117] "RemoveContainer" containerID="281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3" Apr 21 04:16:22.757195 containerd[1580]: time="2026-04-21T04:16:22.754819072Z" level=error msg="ContainerStatus for \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\": not found" Apr 21 04:16:22.758940 kubelet[3069]: E0421 04:16:22.758518 3069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\": not found" containerID="281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3" Apr 21 04:16:22.759364 kubelet[3069]: I0421 04:16:22.758980 3069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3"} err="failed to get container status \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\": rpc error: code = NotFound desc = an error occurred when try to find container \"281a73cf25f75e6b37531675bc28f5b2acb95c0c1b7ead75f1a3a0eaa72e3ae3\": not found" Apr 21 04:16:22.786584 sshd[4834]: Connection closed by 10.0.0.1 port 47784 Apr 21 04:16:22.788649 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Apr 21 04:16:22.852639 systemd[1]: 
sshd@36-10.0.0.144:22-10.0.0.1:47784.service: Deactivated successfully. Apr 21 04:16:22.862055 kubelet[3069]: E0421 04:16:22.861746 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 04:16:22.897636 systemd[1]: session-37.scope: Deactivated successfully. Apr 21 04:16:22.907942 systemd[1]: session-37.scope: Consumed 1.440s CPU time, 25.4M memory peak. Apr 21 04:16:22.918532 systemd-logind[1564]: Session 37 logged out. Waiting for processes to exit. Apr 21 04:16:22.950822 systemd[1]: Started sshd@37-10.0.0.144:22-10.0.0.1:47788.service - OpenSSH per-connection server daemon (10.0.0.1:47788). Apr 21 04:16:22.961080 systemd-logind[1564]: Removed session 37. Apr 21 04:16:23.159394 sshd[4976]: Accepted publickey for core from 10.0.0.1 port 47788 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:16:23.189308 sshd-session[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:16:23.238336 systemd-logind[1564]: New session 38 of user core. Apr 21 04:16:23.252674 systemd[1]: Started session-38.scope - Session 38 of User core. Apr 21 04:16:24.295598 sshd[4979]: Connection closed by 10.0.0.1 port 47788 Apr 21 04:16:24.299981 sshd-session[4976]: pam_unix(sshd:session): session closed for user core Apr 21 04:16:24.387948 systemd[1]: Started sshd@38-10.0.0.144:22-10.0.0.1:47798.service - OpenSSH per-connection server daemon (10.0.0.1:47798). Apr 21 04:16:24.434865 systemd[1]: sshd@37-10.0.0.144:22-10.0.0.1:47788.service: Deactivated successfully. Apr 21 04:16:24.445252 systemd[1]: session-38.scope: Deactivated successfully. Apr 21 04:16:24.451158 systemd-logind[1564]: Session 38 logged out. Waiting for processes to exit. Apr 21 04:16:24.473415 systemd-logind[1564]: Removed session 38. 
Apr 21 04:16:24.552063 kubelet[3069]: I0421 04:16:24.550082 3069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77" path="/var/lib/kubelet/pods/5d5a89ab-a873-4aaf-b4d2-1ce3236c8c77/volumes" Apr 21 04:16:24.693922 sshd[4988]: Accepted publickey for core from 10.0.0.1 port 47798 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:16:24.700150 systemd[1]: Created slice kubepods-besteffort-podd019eab0_8e1e_4b3c_b01f_024a42bf0509.slice - libcontainer container kubepods-besteffort-podd019eab0_8e1e_4b3c_b01f_024a42bf0509.slice. Apr 21 04:16:24.702676 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:16:24.724327 kubelet[3069]: I0421 04:16:24.714341 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr7sw\" (UniqueName: \"kubernetes.io/projected/d019eab0-8e1e-4b3c-b01f-024a42bf0509-kube-api-access-fr7sw\") pod \"cilium-operator-6c4d7847fc-64j7b\" (UID: \"d019eab0-8e1e-4b3c-b01f-024a42bf0509\") " pod="kube-system/cilium-operator-6c4d7847fc-64j7b" Apr 21 04:16:24.724327 kubelet[3069]: I0421 04:16:24.714927 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d019eab0-8e1e-4b3c-b01f-024a42bf0509-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-64j7b\" (UID: \"d019eab0-8e1e-4b3c-b01f-024a42bf0509\") " pod="kube-system/cilium-operator-6c4d7847fc-64j7b" Apr 21 04:16:24.739793 systemd-logind[1564]: New session 39 of user core. Apr 21 04:16:24.753316 systemd[1]: Started session-39.scope - Session 39 of User core. Apr 21 04:16:24.788471 systemd[1]: Created slice kubepods-burstable-pod9d0d3008_e52f_44fc_859b_5f314cfaef82.slice - libcontainer container kubepods-burstable-pod9d0d3008_e52f_44fc_859b_5f314cfaef82.slice. 
Apr 21 04:16:24.820891 kubelet[3069]: I0421 04:16:24.819905 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d0d3008-e52f-44fc-859b-5f314cfaef82-bpf-maps\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.820891 kubelet[3069]: I0421 04:16:24.819996 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9d0d3008-e52f-44fc-859b-5f314cfaef82-cilium-ipsec-secrets\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.820891 kubelet[3069]: I0421 04:16:24.820022 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnpdx\" (UniqueName: \"kubernetes.io/projected/9d0d3008-e52f-44fc-859b-5f314cfaef82-kube-api-access-jnpdx\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.820891 kubelet[3069]: I0421 04:16:24.820083 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d0d3008-e52f-44fc-859b-5f314cfaef82-hostproc\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.820891 kubelet[3069]: I0421 04:16:24.820106 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d0d3008-e52f-44fc-859b-5f314cfaef82-cni-path\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.820891 kubelet[3069]: I0421 04:16:24.820125 3069 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d0d3008-e52f-44fc-859b-5f314cfaef82-lib-modules\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.821871 kubelet[3069]: I0421 04:16:24.820146 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d0d3008-e52f-44fc-859b-5f314cfaef82-host-proc-sys-net\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.821871 kubelet[3069]: I0421 04:16:24.820181 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d0d3008-e52f-44fc-859b-5f314cfaef82-xtables-lock\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.821871 kubelet[3069]: I0421 04:16:24.820201 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d0d3008-e52f-44fc-859b-5f314cfaef82-hubble-tls\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.821871 kubelet[3069]: I0421 04:16:24.820221 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d0d3008-e52f-44fc-859b-5f314cfaef82-clustermesh-secrets\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.821871 kubelet[3069]: I0421 04:16:24.820241 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/9d0d3008-e52f-44fc-859b-5f314cfaef82-host-proc-sys-kernel\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.821871 kubelet[3069]: I0421 04:16:24.820261 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d0d3008-e52f-44fc-859b-5f314cfaef82-cilium-cgroup\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.822062 kubelet[3069]: I0421 04:16:24.820283 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d0d3008-e52f-44fc-859b-5f314cfaef82-etc-cni-netd\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.822062 kubelet[3069]: I0421 04:16:24.820301 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d0d3008-e52f-44fc-859b-5f314cfaef82-cilium-run\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.822062 kubelet[3069]: I0421 04:16:24.820321 3069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d0d3008-e52f-44fc-859b-5f314cfaef82-cilium-config-path\") pod \"cilium-f8tnv\" (UID: \"9d0d3008-e52f-44fc-859b-5f314cfaef82\") " pod="kube-system/cilium-f8tnv" Apr 21 04:16:24.823782 sshd[4994]: Connection closed by 10.0.0.1 port 47798 Apr 21 04:16:24.833055 sshd-session[4988]: pam_unix(sshd:session): session closed for user core Apr 21 04:16:24.851760 systemd[1]: sshd@38-10.0.0.144:22-10.0.0.1:47798.service: Deactivated successfully. 
Apr 21 04:16:24.855019 systemd[1]: session-39.scope: Deactivated successfully. Apr 21 04:16:24.857076 systemd-logind[1564]: Session 39 logged out. Waiting for processes to exit. Apr 21 04:16:24.870979 systemd[1]: Started sshd@39-10.0.0.144:22-10.0.0.1:55814.service - OpenSSH per-connection server daemon (10.0.0.1:55814). Apr 21 04:16:24.882141 systemd-logind[1564]: Removed session 39. Apr 21 04:16:25.038772 kubelet[3069]: E0421 04:16:25.038261 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:16:25.043625 sshd[5002]: Accepted publickey for core from 10.0.0.1 port 55814 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 04:16:25.053394 containerd[1580]: time="2026-04-21T04:16:25.048382613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-64j7b,Uid:d019eab0-8e1e-4b3c-b01f-024a42bf0509,Namespace:kube-system,Attempt:0,}" Apr 21 04:16:25.051751 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 04:16:25.088292 systemd-logind[1564]: New session 40 of user core. Apr 21 04:16:25.096921 systemd[1]: Started session-40.scope - Session 40 of User core. 
Apr 21 04:16:25.125798 kubelet[3069]: E0421 04:16:25.125127 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 04:16:25.132787 containerd[1580]: time="2026-04-21T04:16:25.132126262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f8tnv,Uid:9d0d3008-e52f-44fc-859b-5f314cfaef82,Namespace:kube-system,Attempt:0,}" Apr 21 04:16:25.156371 containerd[1580]: time="2026-04-21T04:16:25.156190483Z" level=info msg="connecting to shim 021eb2894f562c9ee3d15b955f3de09989dd76ed0730f036795b07f17cd20c37" address="unix:///run/containerd/s/98310418d0f06f870f7eb07a989d97022b5a9e38b696b7694edef45ca624d856" namespace=k8s.io protocol=ttrpc version=3 Apr 21 04:16:25.181953 containerd[1580]: time="2026-04-21T04:16:25.181881263Z" level=info msg="connecting to shim ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841" address="unix:///run/containerd/s/1eabc0f0c8eac9ea11526975aef6f974e0136f26e0a8bf385abf0aa9306e1ebd" namespace=k8s.io protocol=ttrpc version=3 Apr 21 04:16:25.224183 systemd[1]: Started cri-containerd-021eb2894f562c9ee3d15b955f3de09989dd76ed0730f036795b07f17cd20c37.scope - libcontainer container 021eb2894f562c9ee3d15b955f3de09989dd76ed0730f036795b07f17cd20c37. Apr 21 04:16:25.318044 systemd[1]: Started cri-containerd-ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841.scope - libcontainer container ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841. 
Apr 21 04:16:25.396650 containerd[1580]: time="2026-04-21T04:16:25.396391653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f8tnv,Uid:9d0d3008-e52f-44fc-859b-5f314cfaef82,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841\""
Apr 21 04:16:25.410658 kubelet[3069]: E0421 04:16:25.410539 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:25.436014 containerd[1580]: time="2026-04-21T04:16:25.435888324Z" level=info msg="CreateContainer within sandbox \"ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 21 04:16:25.440456 containerd[1580]: time="2026-04-21T04:16:25.440401049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-64j7b,Uid:d019eab0-8e1e-4b3c-b01f-024a42bf0509,Namespace:kube-system,Attempt:0,} returns sandbox id \"021eb2894f562c9ee3d15b955f3de09989dd76ed0730f036795b07f17cd20c37\""
Apr 21 04:16:25.448765 kubelet[3069]: E0421 04:16:25.448477 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:25.466629 containerd[1580]: time="2026-04-21T04:16:25.464826677Z" level=info msg="Container d0592d9ff4a374117251a999ce39c57e25fe724b11a4ec0f4d1444fdeb604c93: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:16:25.466629 containerd[1580]: time="2026-04-21T04:16:25.466193297Z" level=info msg="CreateContainer within sandbox \"021eb2894f562c9ee3d15b955f3de09989dd76ed0730f036795b07f17cd20c37\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 21 04:16:25.474537 containerd[1580]: time="2026-04-21T04:16:25.474470244Z" level=info msg="CreateContainer within sandbox \"ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d0592d9ff4a374117251a999ce39c57e25fe724b11a4ec0f4d1444fdeb604c93\""
Apr 21 04:16:25.477614 containerd[1580]: time="2026-04-21T04:16:25.476400318Z" level=info msg="StartContainer for \"d0592d9ff4a374117251a999ce39c57e25fe724b11a4ec0f4d1444fdeb604c93\""
Apr 21 04:16:25.485671 containerd[1580]: time="2026-04-21T04:16:25.485502618Z" level=info msg="connecting to shim d0592d9ff4a374117251a999ce39c57e25fe724b11a4ec0f4d1444fdeb604c93" address="unix:///run/containerd/s/1eabc0f0c8eac9ea11526975aef6f974e0136f26e0a8bf385abf0aa9306e1ebd" protocol=ttrpc version=3
Apr 21 04:16:25.490545 containerd[1580]: time="2026-04-21T04:16:25.490093600Z" level=info msg="Container d2190e14b8359c78ff0e935411806a669690ff027c926bca1137f1a1fac628b5: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:16:25.507240 containerd[1580]: time="2026-04-21T04:16:25.506624906Z" level=info msg="CreateContainer within sandbox \"021eb2894f562c9ee3d15b955f3de09989dd76ed0730f036795b07f17cd20c37\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d2190e14b8359c78ff0e935411806a669690ff027c926bca1137f1a1fac628b5\""
Apr 21 04:16:25.512067 containerd[1580]: time="2026-04-21T04:16:25.512003509Z" level=info msg="StartContainer for \"d2190e14b8359c78ff0e935411806a669690ff027c926bca1137f1a1fac628b5\""
Apr 21 04:16:25.517329 containerd[1580]: time="2026-04-21T04:16:25.517153661Z" level=info msg="connecting to shim d2190e14b8359c78ff0e935411806a669690ff027c926bca1137f1a1fac628b5" address="unix:///run/containerd/s/98310418d0f06f870f7eb07a989d97022b5a9e38b696b7694edef45ca624d856" protocol=ttrpc version=3
Apr 21 04:16:25.543177 systemd[1]: Started cri-containerd-d0592d9ff4a374117251a999ce39c57e25fe724b11a4ec0f4d1444fdeb604c93.scope - libcontainer container d0592d9ff4a374117251a999ce39c57e25fe724b11a4ec0f4d1444fdeb604c93.
Apr 21 04:16:25.574871 systemd[1]: Started cri-containerd-d2190e14b8359c78ff0e935411806a669690ff027c926bca1137f1a1fac628b5.scope - libcontainer container d2190e14b8359c78ff0e935411806a669690ff027c926bca1137f1a1fac628b5.
Apr 21 04:16:25.653181 containerd[1580]: time="2026-04-21T04:16:25.652138122Z" level=info msg="StartContainer for \"d0592d9ff4a374117251a999ce39c57e25fe724b11a4ec0f4d1444fdeb604c93\" returns successfully"
Apr 21 04:16:25.653181 containerd[1580]: time="2026-04-21T04:16:25.653022756Z" level=info msg="StartContainer for \"d2190e14b8359c78ff0e935411806a669690ff027c926bca1137f1a1fac628b5\" returns successfully"
Apr 21 04:16:25.655480 systemd[1]: cri-containerd-d0592d9ff4a374117251a999ce39c57e25fe724b11a4ec0f4d1444fdeb604c93.scope: Deactivated successfully.
Apr 21 04:16:25.667943 containerd[1580]: time="2026-04-21T04:16:25.667642269Z" level=info msg="received container exit event container_id:\"d0592d9ff4a374117251a999ce39c57e25fe724b11a4ec0f4d1444fdeb604c93\" id:\"d0592d9ff4a374117251a999ce39c57e25fe724b11a4ec0f4d1444fdeb604c93\" pid:5132 exited_at:{seconds:1776744985 nanos:664095449}"
Apr 21 04:16:26.162602 containerd[1580]: time="2026-04-21T04:16:26.162139465Z" level=warning msg="container event discarded" container=dbe267893b1e2b36dbf8f1d3a3c15a6de43e8ea22f9fa8d1da603f0aac175057 type=CONTAINER_CREATED_EVENT
Apr 21 04:16:26.162602 containerd[1580]: time="2026-04-21T04:16:26.162474915Z" level=warning msg="container event discarded" container=dbe267893b1e2b36dbf8f1d3a3c15a6de43e8ea22f9fa8d1da603f0aac175057 type=CONTAINER_STARTED_EVENT
Apr 21 04:16:26.298913 kubelet[3069]: E0421 04:16:26.298622 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:26.307626 kubelet[3069]: E0421 04:16:26.300724 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:26.329418 containerd[1580]: time="2026-04-21T04:16:26.329182135Z" level=info msg="CreateContainer within sandbox \"ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 21 04:16:26.351746 containerd[1580]: time="2026-04-21T04:16:26.350961098Z" level=info msg="Container 19f7b44c84a1943bca4b67aa8022e4dd7056763ed9878a8a9bdada4916d433af: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:16:26.361401 containerd[1580]: time="2026-04-21T04:16:26.361279964Z" level=info msg="CreateContainer within sandbox \"ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"19f7b44c84a1943bca4b67aa8022e4dd7056763ed9878a8a9bdada4916d433af\""
Apr 21 04:16:26.365830 containerd[1580]: time="2026-04-21T04:16:26.365750911Z" level=info msg="StartContainer for \"19f7b44c84a1943bca4b67aa8022e4dd7056763ed9878a8a9bdada4916d433af\""
Apr 21 04:16:26.367098 containerd[1580]: time="2026-04-21T04:16:26.367045563Z" level=info msg="connecting to shim 19f7b44c84a1943bca4b67aa8022e4dd7056763ed9878a8a9bdada4916d433af" address="unix:///run/containerd/s/1eabc0f0c8eac9ea11526975aef6f974e0136f26e0a8bf385abf0aa9306e1ebd" protocol=ttrpc version=3
Apr 21 04:16:26.457139 systemd[1]: Started cri-containerd-19f7b44c84a1943bca4b67aa8022e4dd7056763ed9878a8a9bdada4916d433af.scope - libcontainer container 19f7b44c84a1943bca4b67aa8022e4dd7056763ed9878a8a9bdada4916d433af.
Apr 21 04:16:26.631509 containerd[1580]: time="2026-04-21T04:16:26.630866277Z" level=info msg="StartContainer for \"19f7b44c84a1943bca4b67aa8022e4dd7056763ed9878a8a9bdada4916d433af\" returns successfully"
Apr 21 04:16:26.646853 systemd[1]: cri-containerd-19f7b44c84a1943bca4b67aa8022e4dd7056763ed9878a8a9bdada4916d433af.scope: Deactivated successfully.
Apr 21 04:16:26.650090 containerd[1580]: time="2026-04-21T04:16:26.648215074Z" level=info msg="received container exit event container_id:\"19f7b44c84a1943bca4b67aa8022e4dd7056763ed9878a8a9bdada4916d433af\" id:\"19f7b44c84a1943bca4b67aa8022e4dd7056763ed9878a8a9bdada4916d433af\" pid:5198 exited_at:{seconds:1776744986 nanos:646531853}"
Apr 21 04:16:26.729154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19f7b44c84a1943bca4b67aa8022e4dd7056763ed9878a8a9bdada4916d433af-rootfs.mount: Deactivated successfully.
Apr 21 04:16:27.344525 kubelet[3069]: E0421 04:16:27.344065 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:27.387737 containerd[1580]: time="2026-04-21T04:16:27.387434867Z" level=info msg="CreateContainer within sandbox \"ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 21 04:16:27.467797 containerd[1580]: time="2026-04-21T04:16:27.467179108Z" level=info msg="Container e06205c151f283871cd7b357eecdd94f98614efadb529d0ec7470fec2f7d24af: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:16:27.511970 kubelet[3069]: I0421 04:16:27.511340 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-64j7b" podStartSLOduration=3.511093483 podStartE2EDuration="3.511093483s" podCreationTimestamp="2026-04-21 04:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 04:16:26.51460264 +0000 UTC m=+363.939313157" watchObservedRunningTime="2026-04-21 04:16:27.511093483 +0000 UTC m=+364.935803982"
Apr 21 04:16:27.518599 containerd[1580]: time="2026-04-21T04:16:27.518503505Z" level=info msg="CreateContainer within sandbox \"ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e06205c151f283871cd7b357eecdd94f98614efadb529d0ec7470fec2f7d24af\""
Apr 21 04:16:27.529059 containerd[1580]: time="2026-04-21T04:16:27.528524759Z" level=info msg="StartContainer for \"e06205c151f283871cd7b357eecdd94f98614efadb529d0ec7470fec2f7d24af\""
Apr 21 04:16:27.542733 containerd[1580]: time="2026-04-21T04:16:27.541778906Z" level=info msg="connecting to shim e06205c151f283871cd7b357eecdd94f98614efadb529d0ec7470fec2f7d24af" address="unix:///run/containerd/s/1eabc0f0c8eac9ea11526975aef6f974e0136f26e0a8bf385abf0aa9306e1ebd" protocol=ttrpc version=3
Apr 21 04:16:27.596995 systemd[1]: Started cri-containerd-e06205c151f283871cd7b357eecdd94f98614efadb529d0ec7470fec2f7d24af.scope - libcontainer container e06205c151f283871cd7b357eecdd94f98614efadb529d0ec7470fec2f7d24af.
Apr 21 04:16:27.811962 containerd[1580]: time="2026-04-21T04:16:27.811904319Z" level=info msg="StartContainer for \"e06205c151f283871cd7b357eecdd94f98614efadb529d0ec7470fec2f7d24af\" returns successfully"
Apr 21 04:16:27.813459 systemd[1]: cri-containerd-e06205c151f283871cd7b357eecdd94f98614efadb529d0ec7470fec2f7d24af.scope: Deactivated successfully.
Apr 21 04:16:27.819734 containerd[1580]: time="2026-04-21T04:16:27.819537128Z" level=info msg="received container exit event container_id:\"e06205c151f283871cd7b357eecdd94f98614efadb529d0ec7470fec2f7d24af\" id:\"e06205c151f283871cd7b357eecdd94f98614efadb529d0ec7470fec2f7d24af\" pid:5242 exited_at:{seconds:1776744987 nanos:819197578}"
Apr 21 04:16:27.873824 kubelet[3069]: E0421 04:16:27.870652 3069 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 21 04:16:27.900548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e06205c151f283871cd7b357eecdd94f98614efadb529d0ec7470fec2f7d24af-rootfs.mount: Deactivated successfully.
Apr 21 04:16:28.404531 containerd[1580]: time="2026-04-21T04:16:28.399395640Z" level=warning msg="container event discarded" container=6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11 type=CONTAINER_CREATED_EVENT
Apr 21 04:16:28.439237 containerd[1580]: time="2026-04-21T04:16:28.410888124Z" level=warning msg="container event discarded" container=6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11 type=CONTAINER_STARTED_EVENT
Apr 21 04:16:28.505861 kubelet[3069]: E0421 04:16:28.504655 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:28.541754 containerd[1580]: time="2026-04-21T04:16:28.541075901Z" level=info msg="CreateContainer within sandbox \"ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 21 04:16:28.571840 containerd[1580]: time="2026-04-21T04:16:28.571606047Z" level=info msg="Container 9a22ebed221d75d1acfeab26ddfc017656b77942a24e4f08b2da632f0bca44ea: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:16:28.599126 containerd[1580]: time="2026-04-21T04:16:28.598886246Z" level=info msg="CreateContainer within sandbox \"ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9a22ebed221d75d1acfeab26ddfc017656b77942a24e4f08b2da632f0bca44ea\""
Apr 21 04:16:28.602299 containerd[1580]: time="2026-04-21T04:16:28.602263518Z" level=info msg="StartContainer for \"9a22ebed221d75d1acfeab26ddfc017656b77942a24e4f08b2da632f0bca44ea\""
Apr 21 04:16:28.606231 containerd[1580]: time="2026-04-21T04:16:28.606191640Z" level=info msg="connecting to shim 9a22ebed221d75d1acfeab26ddfc017656b77942a24e4f08b2da632f0bca44ea" address="unix:///run/containerd/s/1eabc0f0c8eac9ea11526975aef6f974e0136f26e0a8bf385abf0aa9306e1ebd" protocol=ttrpc version=3
Apr 21 04:16:28.655268 systemd[1]: Started cri-containerd-9a22ebed221d75d1acfeab26ddfc017656b77942a24e4f08b2da632f0bca44ea.scope - libcontainer container 9a22ebed221d75d1acfeab26ddfc017656b77942a24e4f08b2da632f0bca44ea.
Apr 21 04:16:28.774765 systemd[1]: cri-containerd-9a22ebed221d75d1acfeab26ddfc017656b77942a24e4f08b2da632f0bca44ea.scope: Deactivated successfully.
Apr 21 04:16:28.781880 containerd[1580]: time="2026-04-21T04:16:28.781809311Z" level=info msg="received container exit event container_id:\"9a22ebed221d75d1acfeab26ddfc017656b77942a24e4f08b2da632f0bca44ea\" id:\"9a22ebed221d75d1acfeab26ddfc017656b77942a24e4f08b2da632f0bca44ea\" pid:5283 exited_at:{seconds:1776744988 nanos:774468667}"
Apr 21 04:16:28.797383 containerd[1580]: time="2026-04-21T04:16:28.797338328Z" level=info msg="StartContainer for \"9a22ebed221d75d1acfeab26ddfc017656b77942a24e4f08b2da632f0bca44ea\" returns successfully"
Apr 21 04:16:28.817134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a22ebed221d75d1acfeab26ddfc017656b77942a24e4f08b2da632f0bca44ea-rootfs.mount: Deactivated successfully.
Apr 21 04:16:29.126997 containerd[1580]: time="2026-04-21T04:16:29.125865882Z" level=warning msg="container event discarded" container=f884e0d52742c409759d5eb65923d62053f0cb50758d7d0b0b670eacc68eb875 type=CONTAINER_CREATED_EVENT
Apr 21 04:16:29.534649 kubelet[3069]: E0421 04:16:29.503672 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:29.586994 containerd[1580]: time="2026-04-21T04:16:29.585998517Z" level=info msg="CreateContainer within sandbox \"ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 21 04:16:29.645375 containerd[1580]: time="2026-04-21T04:16:29.643536899Z" level=info msg="Container 7bf0a78a2cd93325ba3bb0668020f7a6bd28f531ff263f354262670c74fd885e: CDI devices from CRI Config.CDIDevices: []"
Apr 21 04:16:29.662241 containerd[1580]: time="2026-04-21T04:16:29.661819176Z" level=info msg="CreateContainer within sandbox \"ea5cc5d365c251129dd8bbc467822011dd5d8c477dfb05df5c531208db8bf841\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7bf0a78a2cd93325ba3bb0668020f7a6bd28f531ff263f354262670c74fd885e\""
Apr 21 04:16:29.671736 containerd[1580]: time="2026-04-21T04:16:29.669673455Z" level=info msg="StartContainer for \"7bf0a78a2cd93325ba3bb0668020f7a6bd28f531ff263f354262670c74fd885e\""
Apr 21 04:16:29.672824 containerd[1580]: time="2026-04-21T04:16:29.672742040Z" level=info msg="connecting to shim 7bf0a78a2cd93325ba3bb0668020f7a6bd28f531ff263f354262670c74fd885e" address="unix:///run/containerd/s/1eabc0f0c8eac9ea11526975aef6f974e0136f26e0a8bf385abf0aa9306e1ebd" protocol=ttrpc version=3
Apr 21 04:16:29.735086 systemd[1]: Started cri-containerd-7bf0a78a2cd93325ba3bb0668020f7a6bd28f531ff263f354262670c74fd885e.scope - libcontainer container 7bf0a78a2cd93325ba3bb0668020f7a6bd28f531ff263f354262670c74fd885e.
Apr 21 04:16:29.879609 containerd[1580]: time="2026-04-21T04:16:29.878730740Z" level=info msg="StartContainer for \"7bf0a78a2cd93325ba3bb0668020f7a6bd28f531ff263f354262670c74fd885e\" returns successfully"
Apr 21 04:16:30.516371 kubelet[3069]: E0421 04:16:30.515817 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:30.668860 kubelet[3069]: I0421 04:16:30.668479 3069 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-21T04:16:30Z","lastTransitionTime":"2026-04-21T04:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 21 04:16:30.680223 kubelet[3069]: I0421 04:16:30.680119 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f8tnv" podStartSLOduration=6.679929614 podStartE2EDuration="6.679929614s" podCreationTimestamp="2026-04-21 04:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 04:16:30.678026401 +0000 UTC m=+368.102736918" watchObservedRunningTime="2026-04-21 04:16:30.679929614 +0000 UTC m=+368.104640123"
Apr 21 04:16:30.788737 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256))
Apr 21 04:16:31.523797 kubelet[3069]: E0421 04:16:31.523476 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:32.629038 containerd[1580]: time="2026-04-21T04:16:32.628321851Z" level=warning msg="container event discarded" container=0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe type=CONTAINER_CREATED_EVENT
Apr 21 04:16:32.629038 containerd[1580]: time="2026-04-21T04:16:32.628866375Z" level=warning msg="container event discarded" container=0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe type=CONTAINER_STARTED_EVENT
Apr 21 04:16:34.039788 containerd[1580]: time="2026-04-21T04:16:34.039440872Z" level=info msg="StopPodSandbox for \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\""
Apr 21 04:16:34.045781 containerd[1580]: time="2026-04-21T04:16:34.040587454Z" level=info msg="TearDown network for sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" successfully"
Apr 21 04:16:34.045781 containerd[1580]: time="2026-04-21T04:16:34.040616930Z" level=info msg="StopPodSandbox for \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" returns successfully"
Apr 21 04:16:34.045781 containerd[1580]: time="2026-04-21T04:16:34.042088710Z" level=info msg="RemovePodSandbox for \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\""
Apr 21 04:16:34.045781 containerd[1580]: time="2026-04-21T04:16:34.042144889Z" level=info msg="Forcibly stopping sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\""
Apr 21 04:16:34.045781 containerd[1580]: time="2026-04-21T04:16:34.042412493Z" level=info msg="TearDown network for sandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" successfully"
Apr 21 04:16:34.045781 containerd[1580]: time="2026-04-21T04:16:34.045632201Z" level=info msg="Ensure that sandbox 6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11 in task-service has been cleanup successfully"
Apr 21 04:16:34.102752 containerd[1580]: time="2026-04-21T04:16:34.100980484Z" level=info msg="RemovePodSandbox \"6adaeb1e391a135e96cd592ad7726f124c989c2727fc6328641a3573e1a23e11\" returns successfully"
Apr 21 04:16:34.116885 containerd[1580]: time="2026-04-21T04:16:34.116161966Z" level=info msg="StopPodSandbox for \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\""
Apr 21 04:16:34.123009 containerd[1580]: time="2026-04-21T04:16:34.119049032Z" level=info msg="TearDown network for sandbox \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\" successfully"
Apr 21 04:16:34.123009 containerd[1580]: time="2026-04-21T04:16:34.119097095Z" level=info msg="StopPodSandbox for \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\" returns successfully"
Apr 21 04:16:34.131573 containerd[1580]: time="2026-04-21T04:16:34.127954718Z" level=info msg="RemovePodSandbox for \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\""
Apr 21 04:16:34.131573 containerd[1580]: time="2026-04-21T04:16:34.128229423Z" level=info msg="Forcibly stopping sandbox \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\""
Apr 21 04:16:34.136354 containerd[1580]: time="2026-04-21T04:16:34.136147707Z" level=info msg="TearDown network for sandbox \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\" successfully"
Apr 21 04:16:34.142547 containerd[1580]: time="2026-04-21T04:16:34.142451140Z" level=info msg="Ensure that sandbox 0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe in task-service has been cleanup successfully"
Apr 21 04:16:34.151844 containerd[1580]: time="2026-04-21T04:16:34.151441192Z" level=info msg="RemovePodSandbox \"0b1effaa351d3a20a1b6e534aacb2a564570378b059c26c1b9d6b343c30e04fe\" returns successfully"
Apr 21 04:16:37.890475 systemd-networkd[1483]: lxc_health: Link UP
Apr 21 04:16:37.908135 systemd-networkd[1483]: lxc_health: Gained carrier
Apr 21 04:16:39.155127 kubelet[3069]: E0421 04:16:39.154606 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:39.726751 kubelet[3069]: E0421 04:16:39.725879 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:39.818513 systemd-networkd[1483]: lxc_health: Gained IPv6LL
Apr 21 04:16:40.879916 kubelet[3069]: E0421 04:16:40.879681 3069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 04:16:40.883310 containerd[1580]: time="2026-04-21T04:16:40.882002210Z" level=warning msg="container event discarded" container=f884e0d52742c409759d5eb65923d62053f0cb50758d7d0b0b670eacc68eb875 type=CONTAINER_STARTED_EVENT
Apr 21 04:16:42.904276 sshd[5010]: Connection closed by 10.0.0.1 port 55814
Apr 21 04:16:42.905752 sshd-session[5002]: pam_unix(sshd:session): session closed for user core
Apr 21 04:16:42.996766 systemd[1]: sshd@39-10.0.0.144:22-10.0.0.1:55814.service: Deactivated successfully.
Apr 21 04:16:43.036749 systemd[1]: session-40.scope: Deactivated successfully.
Apr 21 04:16:43.039634 systemd-logind[1564]: Session 40 logged out. Waiting for processes to exit.
Apr 21 04:16:43.050044 systemd-logind[1564]: Removed session 40.