Apr 17 02:41:37.154305 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 16 22:00:21 -00 2026 Apr 17 02:41:37.154329 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9 Apr 17 02:41:37.154383 kernel: BIOS-provided physical RAM map: Apr 17 02:41:37.154388 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 17 02:41:37.154394 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 17 02:41:37.154399 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 17 02:41:37.154404 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 17 02:41:37.154408 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 17 02:41:37.154413 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 17 02:41:37.154417 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 17 02:41:37.154423 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 17 02:41:37.154431 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 17 02:41:37.154436 kernel: NX (Execute Disable) protection: active Apr 17 02:41:37.154440 kernel: APIC: Static calls initialized Apr 17 02:41:37.154446 kernel: SMBIOS 2.8 present. 
Apr 17 02:41:37.154451 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 17 02:41:37.154457 kernel: DMI: Memory slots populated: 1/1 Apr 17 02:41:37.154462 kernel: Hypervisor detected: KVM Apr 17 02:41:37.154467 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 17 02:41:37.154471 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 17 02:41:37.154476 kernel: kvm-clock: using sched offset of 9992231963 cycles Apr 17 02:41:37.154482 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 17 02:41:37.154487 kernel: tsc: Detected 2793.438 MHz processor Apr 17 02:41:37.154495 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 17 02:41:37.154503 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 17 02:41:37.154511 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 17 02:41:37.154520 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 17 02:41:37.154527 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 17 02:41:37.154535 kernel: Using GB pages for direct mapping Apr 17 02:41:37.154544 kernel: ACPI: Early table checksum verification disabled Apr 17 02:41:37.154552 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 17 02:41:37.154561 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 02:41:37.154570 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 02:41:37.154579 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 02:41:37.154587 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 17 02:41:37.154597 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 02:41:37.154644 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 02:41:37.154653 kernel: ACPI: MCFG 
0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 02:41:37.154658 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 17 02:41:37.154663 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 17 02:41:37.154673 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 17 02:41:37.154681 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 17 02:41:37.154686 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 17 02:41:37.154691 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 17 02:41:37.154696 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 17 02:41:37.154701 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 17 02:41:37.154706 kernel: No NUMA configuration found Apr 17 02:41:37.154711 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 17 02:41:37.154716 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Apr 17 02:41:37.154723 kernel: Zone ranges: Apr 17 02:41:37.154729 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 17 02:41:37.154736 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 17 02:41:37.154745 kernel: Normal empty Apr 17 02:41:37.154752 kernel: Device empty Apr 17 02:41:37.154759 kernel: Movable zone start for each node Apr 17 02:41:37.154767 kernel: Early memory node ranges Apr 17 02:41:37.154776 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 17 02:41:37.154785 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 17 02:41:37.154794 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 17 02:41:37.154805 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 17 02:41:37.154814 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 17 02:41:37.154823 kernel: On node 0, zone DMA32: 12324 pages in unavailable 
ranges Apr 17 02:41:37.154832 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 17 02:41:37.154841 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 17 02:41:37.154849 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 17 02:41:37.154858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 17 02:41:37.154867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 17 02:41:37.154876 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 17 02:41:37.154887 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 17 02:41:37.154896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 17 02:41:37.154904 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 17 02:41:37.154911 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 17 02:41:37.154919 kernel: TSC deadline timer available Apr 17 02:41:37.154928 kernel: CPU topo: Max. logical packages: 1 Apr 17 02:41:37.154936 kernel: CPU topo: Max. logical dies: 1 Apr 17 02:41:37.154944 kernel: CPU topo: Max. dies per package: 1 Apr 17 02:41:37.154953 kernel: CPU topo: Max. threads per core: 1 Apr 17 02:41:37.154964 kernel: CPU topo: Num. cores per package: 4 Apr 17 02:41:37.154973 kernel: CPU topo: Num. 
threads per package: 4 Apr 17 02:41:37.154982 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Apr 17 02:41:37.154989 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 17 02:41:37.154997 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 17 02:41:37.155006 kernel: kvm-guest: setup PV sched yield Apr 17 02:41:37.155013 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 17 02:41:37.155021 kernel: Booting paravirtualized kernel on KVM Apr 17 02:41:37.155029 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 17 02:41:37.155039 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 17 02:41:37.155050 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288 Apr 17 02:41:37.155058 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152 Apr 17 02:41:37.155065 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 17 02:41:37.155075 kernel: kvm-guest: PV spinlocks enabled Apr 17 02:41:37.155082 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 17 02:41:37.155091 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9 Apr 17 02:41:37.155102 kernel: random: crng init done Apr 17 02:41:37.155112 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 17 02:41:37.155124 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 17 02:41:37.155133 kernel: Fallback order for Node 0: 0 Apr 17 02:41:37.155141 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 642938 Apr 17 02:41:37.155150 kernel: Policy zone: DMA32 Apr 17 02:41:37.155159 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 17 02:41:37.155167 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 17 02:41:37.155176 kernel: ftrace: allocating 40126 entries in 157 pages Apr 17 02:41:37.155314 kernel: ftrace: allocated 157 pages with 5 groups Apr 17 02:41:37.155325 kernel: Dynamic Preempt: voluntary Apr 17 02:41:37.155339 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 17 02:41:37.155354 kernel: rcu: RCU event tracing is enabled. Apr 17 02:41:37.155362 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 17 02:41:37.155371 kernel: Trampoline variant of Tasks RCU enabled. Apr 17 02:41:37.155380 kernel: Rude variant of Tasks RCU enabled. Apr 17 02:41:37.155389 kernel: Tracing variant of Tasks RCU enabled. Apr 17 02:41:37.155399 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 17 02:41:37.155406 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 17 02:41:37.155415 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 17 02:41:37.155426 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 17 02:41:37.155434 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 17 02:41:37.155444 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 17 02:41:37.155451 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 17 02:41:37.155460 kernel: Console: colour VGA+ 80x25 Apr 17 02:41:37.155475 kernel: printk: legacy console [ttyS0] enabled Apr 17 02:41:37.155485 kernel: ACPI: Core revision 20240827 Apr 17 02:41:37.155495 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 17 02:41:37.155505 kernel: APIC: Switch to symmetric I/O mode setup Apr 17 02:41:37.155512 kernel: x2apic enabled Apr 17 02:41:37.155522 kernel: APIC: Switched APIC routing to: physical x2apic Apr 17 02:41:37.155529 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 17 02:41:37.155542 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 17 02:41:37.155553 kernel: kvm-guest: setup PV IPIs Apr 17 02:41:37.155563 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 17 02:41:37.155574 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 17 02:41:37.155584 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 17 02:41:37.155595 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 17 02:41:37.155603 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 17 02:41:37.155639 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 17 02:41:37.155647 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 17 02:41:37.155657 kernel: Spectre V2 : Mitigation: Retpolines Apr 17 02:41:37.155668 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 17 02:41:37.155678 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 17 02:41:37.155689 kernel: RETBleed: Vulnerable Apr 17 02:41:37.155701 kernel: Speculative Store Bypass: Vulnerable Apr 17 02:41:37.155710 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 02:41:37.155719 kernel: GDS: Unknown: Dependent on hypervisor status Apr 17 02:41:37.155728 kernel: active return thunk: its_return_thunk Apr 17 02:41:37.155735 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 17 02:41:37.155746 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 17 02:41:37.155756 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 17 02:41:37.155767 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 17 02:41:37.155777 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 17 02:41:37.155789 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 17 02:41:37.155800 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 17 02:41:37.155807 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 17 02:41:37.155817 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 17 02:41:37.155825 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 17 02:41:37.155835 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 17 02:41:37.155846 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 17 02:41:37.155856 kernel: Freeing SMP alternatives memory: 32K Apr 17 02:41:37.155866 kernel: pid_max: default: 32768 minimum: 301 Apr 17 02:41:37.155878 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Apr 17 02:41:37.155886 kernel: landlock: Up and running. Apr 17 02:41:37.155895 kernel: SELinux: Initializing. 
Apr 17 02:41:37.155904 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 17 02:41:37.155912 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 17 02:41:37.155923 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 17 02:41:37.155933 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 17 02:41:37.155941 kernel: signal: max sigframe size: 3632 Apr 17 02:41:37.155950 kernel: rcu: Hierarchical SRCU implementation. Apr 17 02:41:37.155960 kernel: rcu: Max phase no-delay instances is 400. Apr 17 02:41:37.155971 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Apr 17 02:41:37.155982 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 17 02:41:37.155992 kernel: smp: Bringing up secondary CPUs ... Apr 17 02:41:37.156002 kernel: smpboot: x86: Booting SMP configuration: Apr 17 02:41:37.156012 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 17 02:41:37.156021 kernel: smp: Brought up 1 node, 4 CPUs Apr 17 02:41:37.156030 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 17 02:41:37.156041 kernel: Memory: 2419756K/2571752K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46216K init, 2532K bss, 146108K reserved, 0K cma-reserved) Apr 17 02:41:37.156053 kernel: devtmpfs: initialized Apr 17 02:41:37.156064 kernel: x86/mm: Memory block size: 128MB Apr 17 02:41:37.156074 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 17 02:41:37.156084 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 17 02:41:37.156093 kernel: pinctrl core: initialized pinctrl subsystem Apr 17 02:41:37.156101 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 17 02:41:37.156110 kernel: audit: initializing netlink subsys (disabled) Apr 17 02:41:37.156118 kernel: audit: type=2000 audit(1776393692.760:1): state=initialized audit_enabled=0 res=1 Apr 17 02:41:37.156126 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 17 02:41:37.156137 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 17 02:41:37.156145 kernel: cpuidle: using governor menu Apr 17 02:41:37.156154 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 17 02:41:37.156163 kernel: dca service started, version 1.12.1 Apr 17 02:41:37.156172 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Apr 17 02:41:37.156308 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 17 02:41:37.156320 kernel: PCI: Using configuration type 1 for base access Apr 17 02:41:37.156329 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 17 02:41:37.156337 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 17 02:41:37.156351 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 17 02:41:37.156360 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 17 02:41:37.156369 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 17 02:41:37.156379 kernel: ACPI: Added _OSI(Module Device) Apr 17 02:41:37.156388 kernel: ACPI: Added _OSI(Processor Device) Apr 17 02:41:37.156398 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 17 02:41:37.156409 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 17 02:41:37.156419 kernel: ACPI: Interpreter enabled Apr 17 02:41:37.156430 kernel: ACPI: PM: (supports S0 S3 S5) Apr 17 02:41:37.156571 kernel: ACPI: Using IOAPIC for interrupt routing Apr 17 02:41:37.156583 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 17 02:41:37.156593 kernel: PCI: Using E820 reservations for host bridge windows Apr 17 02:41:37.156602 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 17 02:41:37.156651 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 17 02:41:37.156892 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 17 02:41:37.156982 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 17 02:41:37.157775 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 17 02:41:37.157798 kernel: PCI host bridge to bus 0000:00 Apr 17 02:41:37.157883 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 17 02:41:37.157955 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 17 02:41:37.158026 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 17 02:41:37.158096 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 17 02:41:37.158169 kernel: 
pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 17 02:41:37.158670 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 17 02:41:37.158745 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 17 02:41:37.158844 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Apr 17 02:41:37.158926 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Apr 17 02:41:37.159006 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Apr 17 02:41:37.159083 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Apr 17 02:41:37.159161 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Apr 17 02:41:37.159292 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 17 02:41:37.159385 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Apr 17 02:41:37.159469 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Apr 17 02:41:37.159552 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Apr 17 02:41:37.159663 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Apr 17 02:41:37.159755 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Apr 17 02:41:37.159836 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Apr 17 02:41:37.159926 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Apr 17 02:41:37.160010 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Apr 17 02:41:37.160097 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Apr 17 02:41:37.160176 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Apr 17 02:41:37.160557 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Apr 17 02:41:37.160674 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 17 02:41:37.160761 kernel: pci 0000:00:04.0: ROM [mem 
0xfeb80000-0xfebbffff pref] Apr 17 02:41:37.160861 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Apr 17 02:41:37.160944 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 17 02:41:37.161033 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Apr 17 02:41:37.161139 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Apr 17 02:41:37.161238 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Apr 17 02:41:37.161328 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Apr 17 02:41:37.161390 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Apr 17 02:41:37.161398 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 17 02:41:37.161404 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 17 02:41:37.161410 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 17 02:41:37.161415 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 17 02:41:37.161421 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 17 02:41:37.161427 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 17 02:41:37.161433 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 17 02:41:37.161443 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 17 02:41:37.161452 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 17 02:41:37.161460 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 17 02:41:37.161470 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 17 02:41:37.161480 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 17 02:41:37.161489 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 17 02:41:37.161498 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 17 02:41:37.161510 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 17 02:41:37.161520 kernel: ACPI: PCI: 
Interrupt link GSIH configured for IRQ 23 Apr 17 02:41:37.161531 kernel: iommu: Default domain type: Translated Apr 17 02:41:37.161540 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 17 02:41:37.161549 kernel: PCI: Using ACPI for IRQ routing Apr 17 02:41:37.161557 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 17 02:41:37.161567 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 17 02:41:37.161578 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 17 02:41:37.161694 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 17 02:41:37.161775 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 17 02:41:37.161858 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 17 02:41:37.161872 kernel: vgaarb: loaded Apr 17 02:41:37.161881 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 17 02:41:37.161889 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 17 02:41:37.161899 kernel: clocksource: Switched to clocksource kvm-clock Apr 17 02:41:37.161907 kernel: VFS: Disk quotas dquot_6.6.0 Apr 17 02:41:37.161916 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 17 02:41:37.161924 kernel: pnp: PnP ACPI init Apr 17 02:41:37.162012 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 17 02:41:37.162029 kernel: pnp: PnP ACPI: found 6 devices Apr 17 02:41:37.162040 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 17 02:41:37.162049 kernel: NET: Registered PF_INET protocol family Apr 17 02:41:37.162059 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 17 02:41:37.162065 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 17 02:41:37.162071 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 17 02:41:37.162077 kernel: TCP established hash table entries: 32768 
(order: 6, 262144 bytes, linear) Apr 17 02:41:37.162083 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 17 02:41:37.162089 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 17 02:41:37.162097 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 02:41:37.162103 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 02:41:37.162108 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 17 02:41:37.162114 kernel: NET: Registered PF_XDP protocol family Apr 17 02:41:37.162168 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 17 02:41:37.162260 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 17 02:41:37.162309 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 17 02:41:37.162372 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 17 02:41:37.162445 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 17 02:41:37.162517 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 17 02:41:37.162528 kernel: PCI: CLS 0 bytes, default 64 Apr 17 02:41:37.162538 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 17 02:41:37.162548 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 17 02:41:37.162556 kernel: Initialise system trusted keyrings Apr 17 02:41:37.162565 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 17 02:41:37.162573 kernel: Key type asymmetric registered Apr 17 02:41:37.162582 kernel: Asymmetric key parser 'x509' registered Apr 17 02:41:37.162594 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 17 02:41:37.162603 kernel: io scheduler mq-deadline registered Apr 17 02:41:37.162638 kernel: io scheduler kyber registered Apr 17 02:41:37.162646 kernel: io scheduler bfq registered Apr 17 
02:41:37.162657 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 17 02:41:37.162668 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 17 02:41:37.162679 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 17 02:41:37.162689 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 17 02:41:37.162698 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 17 02:41:37.162709 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 17 02:41:37.162717 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 17 02:41:37.162728 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 17 02:41:37.162739 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 17 02:41:37.162857 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 17 02:41:37.162873 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 17 02:41:37.163036 kernel: rtc_cmos 00:04: registered as rtc0 Apr 17 02:41:37.163115 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T02:41:36 UTC (1776393696) Apr 17 02:41:37.163272 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 17 02:41:37.163286 kernel: intel_pstate: CPU model not supported Apr 17 02:41:37.163296 kernel: NET: Registered PF_INET6 protocol family Apr 17 02:41:37.163325 kernel: Segment Routing with IPv6 Apr 17 02:41:37.163336 kernel: In-situ OAM (IOAM) with IPv6 Apr 17 02:41:37.163347 kernel: NET: Registered PF_PACKET protocol family Apr 17 02:41:37.163358 kernel: Key type dns_resolver registered Apr 17 02:41:37.163367 kernel: IPI shorthand broadcast: enabled Apr 17 02:41:37.163377 kernel: sched_clock: Marking stable (4062048964, 341615782)->(4548549625, -144884879) Apr 17 02:41:37.163390 kernel: registered taskstats version 1 Apr 17 02:41:37.163400 kernel: Loading compiled-in X.509 certificates Apr 17 02:41:37.163409 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 
92f69eed5a22c94634d5240e5e65306547d4ba83' Apr 17 02:41:37.163417 kernel: Demotion targets for Node 0: null Apr 17 02:41:37.163423 kernel: Key type .fscrypt registered Apr 17 02:41:37.163428 kernel: Key type fscrypt-provisioning registered Apr 17 02:41:37.163434 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 17 02:41:37.163440 kernel: ima: Allocated hash algorithm: sha1 Apr 17 02:41:37.163446 kernel: ima: No architecture policies found Apr 17 02:41:37.163453 kernel: clk: Disabling unused clocks Apr 17 02:41:37.163459 kernel: Warning: unable to open an initial console. Apr 17 02:41:37.163465 kernel: Freeing unused kernel image (initmem) memory: 46216K Apr 17 02:41:37.163471 kernel: Write protecting the kernel read-only data: 40960k Apr 17 02:41:37.163477 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K Apr 17 02:41:37.163483 kernel: Run /init as init process Apr 17 02:41:37.163488 kernel: with arguments: Apr 17 02:41:37.163494 kernel: /init Apr 17 02:41:37.163500 kernel: with environment: Apr 17 02:41:37.163507 kernel: HOME=/ Apr 17 02:41:37.163512 kernel: TERM=linux Apr 17 02:41:37.163520 systemd[1]: Successfully made /usr/ read-only. Apr 17 02:41:37.163546 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 17 02:41:37.163567 systemd[1]: Detected virtualization kvm. Apr 17 02:41:37.163573 systemd[1]: Detected architecture x86-64. Apr 17 02:41:37.163587 systemd[1]: Running in initrd. Apr 17 02:41:37.163596 systemd[1]: No hostname configured, using default hostname. Apr 17 02:41:37.163602 systemd[1]: Hostname set to . Apr 17 02:41:37.163630 systemd[1]: Initializing machine ID from VM UUID. 
Apr 17 02:41:37.163636 systemd[1]: Queued start job for default target initrd.target. Apr 17 02:41:37.163642 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 02:41:37.163649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 02:41:37.163655 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 02:41:37.163664 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 02:41:37.163670 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 02:41:37.163677 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 02:41:37.163684 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 02:41:37.163691 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 02:41:37.163697 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 02:41:37.163705 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 02:41:37.163711 systemd[1]: Reached target paths.target - Path Units. Apr 17 02:41:37.163718 systemd[1]: Reached target slices.target - Slice Units. Apr 17 02:41:37.163724 systemd[1]: Reached target swap.target - Swaps. Apr 17 02:41:37.163730 systemd[1]: Reached target timers.target - Timer Units. Apr 17 02:41:37.163736 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 02:41:37.163743 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 02:41:37.163749 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 17 02:41:37.163756 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 17 02:41:37.163763 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 02:41:37.163770 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 02:41:37.163776 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 02:41:37.163782 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 02:41:37.163788 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 02:41:37.163798 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 02:41:37.163804 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 02:41:37.163810 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 17 02:41:37.163817 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 02:41:37.163823 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 02:41:37.163829 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 02:41:37.163836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 02:41:37.163842 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 02:41:37.163850 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 02:41:37.163903 systemd-journald[201]: Collecting audit messages is disabled.
Apr 17 02:41:37.163932 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 02:41:37.163942 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 02:41:37.163953 systemd-journald[201]: Journal started
Apr 17 02:41:37.163976 systemd-journald[201]: Runtime Journal (/run/log/journal/edb58fd497c545c889807e41819ff1c4) is 6M, max 48.2M, 42.2M free.
Apr 17 02:41:37.163597 systemd-modules-load[203]: Inserted module 'overlay'
Apr 17 02:41:37.178657 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 02:41:37.185793 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 02:41:37.188848 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 02:41:37.198750 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 02:41:37.208544 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 02:41:37.212047 systemd-modules-load[203]: Inserted module 'br_netfilter'
Apr 17 02:41:37.214868 kernel: Bridge firewalling registered
Apr 17 02:41:37.217488 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 17 02:41:37.223383 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 02:41:37.226022 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 02:41:37.371313 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 02:41:37.378960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 02:41:37.390168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 02:41:37.398024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 02:41:37.425797 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 02:41:37.428177 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 02:41:37.494844 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 02:41:37.499508 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 02:41:37.543489 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 17 02:41:37.548966 systemd-resolved[235]: Positive Trust Anchors:
Apr 17 02:41:37.548973 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 02:41:37.548996 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 02:41:37.554846 systemd-resolved[235]: Defaulting to hostname 'linux'.
Apr 17 02:41:37.557381 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 02:41:37.559777 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 02:41:37.688310 kernel: SCSI subsystem initialized
Apr 17 02:41:37.704552 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 02:41:37.721276 kernel: iscsi: registered transport (tcp)
Apr 17 02:41:37.765569 kernel: iscsi: registered transport (qla4xxx)
Apr 17 02:41:37.765649 kernel: QLogic iSCSI HBA Driver
Apr 17 02:41:37.821652 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 02:41:37.856890 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 02:41:37.862094 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 02:41:38.012535 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 02:41:38.016745 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 02:41:38.141487 kernel: raid6: avx512x4 gen() 27878 MB/s
Apr 17 02:41:38.160397 kernel: raid6: avx512x2 gen() 31604 MB/s
Apr 17 02:41:38.178416 kernel: raid6: avx512x1 gen() 30325 MB/s
Apr 17 02:41:38.196381 kernel: raid6: avx2x4 gen() 20446 MB/s
Apr 17 02:41:38.214482 kernel: raid6: avx2x2 gen() 21660 MB/s
Apr 17 02:41:38.234497 kernel: raid6: avx2x1 gen() 15959 MB/s
Apr 17 02:41:38.234573 kernel: raid6: using algorithm avx512x2 gen() 31604 MB/s
Apr 17 02:41:38.254999 kernel: raid6: .... xor() 19025 MB/s, rmw enabled
Apr 17 02:41:38.255062 kernel: raid6: using avx512x2 recovery algorithm
Apr 17 02:41:38.282480 kernel: xor: automatically using best checksumming function avx
Apr 17 02:41:38.578770 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 02:41:38.593591 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 02:41:38.599500 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 02:41:38.651538 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Apr 17 02:41:38.658097 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 02:41:38.661560 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 02:41:38.708067 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Apr 17 02:41:38.774054 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 02:41:38.775752 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 02:41:38.877714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 02:41:38.893554 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 02:41:38.991928 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 02:41:38.991990 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 17 02:41:39.164905 kernel: hrtimer: interrupt took 8301021 ns
Apr 17 02:41:39.205298 kernel: AES CTR mode by8 optimization enabled
Apr 17 02:41:39.214772 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 17 02:41:39.227586 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 02:41:39.227677 kernel: GPT:9289727 != 19775487
Apr 17 02:41:39.227693 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 02:41:39.240418 kernel: GPT:9289727 != 19775487
Apr 17 02:41:39.242272 kernel: libata version 3.00 loaded.
Apr 17 02:41:39.242290 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 02:41:39.242302 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 02:41:39.278460 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 17 02:41:39.278546 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 02:41:39.278688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 02:41:39.288986 kernel: ahci 0000:00:1f.2: version 3.0
Apr 17 02:41:39.289399 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 17 02:41:39.288386 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 02:41:39.304528 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 17 02:41:39.306845 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 17 02:41:39.306977 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 17 02:41:39.304371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 02:41:39.310495 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 17 02:41:39.326276 kernel: scsi host0: ahci
Apr 17 02:41:39.331334 kernel: scsi host1: ahci
Apr 17 02:41:39.333282 kernel: scsi host2: ahci
Apr 17 02:41:39.333878 kernel: scsi host3: ahci
Apr 17 02:41:39.338260 kernel: scsi host4: ahci
Apr 17 02:41:39.340266 kernel: scsi host5: ahci
Apr 17 02:41:39.344413 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Apr 17 02:41:39.344450 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Apr 17 02:41:39.344459 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Apr 17 02:41:39.344466 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Apr 17 02:41:39.344477 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Apr 17 02:41:39.344493 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Apr 17 02:41:39.355541 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 17 02:41:39.364475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 02:41:39.620287 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 17 02:41:39.627541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 02:41:39.646888 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 17 02:41:39.647096 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 17 02:41:39.658562 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 02:41:39.675390 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 17 02:41:39.685561 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 17 02:41:39.688412 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 17 02:41:39.690330 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 17 02:41:39.690380 kernel: ata3.00: LPM support broken, forcing max_power
Apr 17 02:41:39.692090 disk-uuid[640]: Primary Header is updated.
Apr 17 02:41:39.692090 disk-uuid[640]: Secondary Entries is updated.
Apr 17 02:41:39.692090 disk-uuid[640]: Secondary Header is updated.
Apr 17 02:41:39.712073 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 17 02:41:39.712670 kernel: ata3.00: applying bridge limits
Apr 17 02:41:39.713129 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 17 02:41:39.713154 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 02:41:39.713161 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 17 02:41:39.713169 kernel: ata3.00: LPM support broken, forcing max_power
Apr 17 02:41:39.713176 kernel: ata3.00: configured for UDMA/100
Apr 17 02:41:39.713359 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 17 02:41:39.713465 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 02:41:39.758735 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 17 02:41:39.758954 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 17 02:41:39.775357 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 17 02:41:40.167985 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 02:41:40.174895 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 02:41:40.178339 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 02:41:40.184867 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 02:41:40.193655 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 02:41:40.227055 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 02:41:40.711478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 02:41:40.713254 disk-uuid[641]: The operation has completed successfully.
Apr 17 02:41:40.751999 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 02:41:40.752417 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 02:41:40.790922 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 02:41:40.823839 sh[670]: Success
Apr 17 02:41:40.853436 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 02:41:40.853517 kernel: device-mapper: uevent: version 1.0.3
Apr 17 02:41:40.856481 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 17 02:41:40.869229 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 17 02:41:40.984490 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 02:41:40.992575 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 02:41:41.021990 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 02:41:41.034767 kernel: BTRFS: device fsid d1542dca-1171-4bcf-9aae-d85dd05fe503 devid 1 transid 32 /dev/mapper/usr (253:0) scanned by mount (682)
Apr 17 02:41:41.034790 kernel: BTRFS info (device dm-0): first mount of filesystem d1542dca-1171-4bcf-9aae-d85dd05fe503
Apr 17 02:41:41.034798 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 02:41:41.049944 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 17 02:41:41.050007 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 17 02:41:41.051728 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 02:41:41.052323 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 02:41:41.059084 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 02:41:41.059910 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 02:41:41.068671 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 02:41:41.126276 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (714)
Apr 17 02:41:41.132131 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 02:41:41.132283 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 02:41:41.139750 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 02:41:41.139856 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 02:41:41.148266 kernel: BTRFS info (device vda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 02:41:41.149444 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 02:41:41.155162 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 02:41:41.577121 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 02:41:41.585548 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 02:41:41.698517 ignition[766]: Ignition 2.22.0
Apr 17 02:41:41.698542 ignition[766]: Stage: fetch-offline
Apr 17 02:41:41.698608 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Apr 17 02:41:41.698615 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 02:41:41.698758 ignition[766]: parsed url from cmdline: ""
Apr 17 02:41:41.698760 ignition[766]: no config URL provided
Apr 17 02:41:41.698764 ignition[766]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 02:41:41.698769 ignition[766]: no config at "/usr/lib/ignition/user.ign"
Apr 17 02:41:41.698801 ignition[766]: op(1): [started] loading QEMU firmware config module
Apr 17 02:41:41.698804 ignition[766]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 17 02:41:41.720853 systemd-networkd[857]: lo: Link UP
Apr 17 02:41:41.720911 systemd-networkd[857]: lo: Gained carrier
Apr 17 02:41:41.727407 systemd-networkd[857]: Enumeration completed
Apr 17 02:41:41.726921 ignition[766]: op(1): [finished] loading QEMU firmware config module
Apr 17 02:41:41.727692 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 02:41:41.727958 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 02:41:41.727961 systemd-networkd[857]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 02:41:41.730895 systemd-networkd[857]: eth0: Link UP
Apr 17 02:41:41.730999 systemd-networkd[857]: eth0: Gained carrier
Apr 17 02:41:41.731012 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 02:41:41.731033 systemd[1]: Reached target network.target - Network.
Apr 17 02:41:41.797458 systemd-networkd[857]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 02:41:41.983479 ignition[766]: parsing config with SHA512: cc0dcbc5ad85c9929a830668ed8ee209f5ecea3ac8df0d7693a2ba78730ad4faf5cbb644bab68580cc01ae87661741f52616271f63cb109e5a9515f90a752734
Apr 17 02:41:41.999307 unknown[766]: fetched base config from "system"
Apr 17 02:41:42.002580 unknown[766]: fetched user config from "qemu"
Apr 17 02:41:42.007260 ignition[766]: fetch-offline: fetch-offline passed
Apr 17 02:41:42.012357 ignition[766]: Ignition finished successfully
Apr 17 02:41:42.016735 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 02:41:42.024466 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 17 02:41:42.025618 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 02:41:42.195972 ignition[866]: Ignition 2.22.0
Apr 17 02:41:42.197034 ignition[866]: Stage: kargs
Apr 17 02:41:42.197444 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Apr 17 02:41:42.197456 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 02:41:42.201357 ignition[866]: kargs: kargs passed
Apr 17 02:41:42.201485 ignition[866]: Ignition finished successfully
Apr 17 02:41:42.219674 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 02:41:42.286386 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 02:41:42.434533 ignition[874]: Ignition 2.22.0
Apr 17 02:41:42.434558 ignition[874]: Stage: disks
Apr 17 02:41:42.434707 ignition[874]: no configs at "/usr/lib/ignition/base.d"
Apr 17 02:41:42.434716 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 02:41:42.444595 ignition[874]: disks: disks passed
Apr 17 02:41:42.447445 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 02:41:42.444714 ignition[874]: Ignition finished successfully
Apr 17 02:41:42.451790 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 02:41:42.457552 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 02:41:42.463902 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 02:41:42.469685 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 02:41:42.473769 systemd[1]: Reached target basic.target - Basic System.
Apr 17 02:41:42.475264 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 02:41:42.588846 systemd-fsck[884]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Apr 17 02:41:42.597534 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 02:41:42.605793 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 02:41:42.804404 kernel: EXT4-fs (vda9): mounted filesystem ee420a69-62b9-42f4-84c7-ea3f2d87c569 r/w with ordered data mode. Quota mode: none.
Apr 17 02:41:42.805907 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 02:41:42.810158 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 02:41:42.826529 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 02:41:42.835921 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 02:41:42.836301 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 02:41:42.836338 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 02:41:42.836356 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 02:41:42.864432 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 02:41:42.869495 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 02:41:42.877569 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892)
Apr 17 02:41:42.877594 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 02:41:42.877607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 02:41:42.887398 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 02:41:42.888305 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 02:41:42.893892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 02:41:43.072221 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 02:41:43.077462 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory
Apr 17 02:41:43.084127 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 02:41:43.105609 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 02:41:43.433413 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 02:41:43.436754 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 02:41:43.438711 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 02:41:43.457697 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 02:41:43.461490 kernel: BTRFS info (device vda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 02:41:43.476387 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 02:41:43.666816 systemd-networkd[857]: eth0: Gained IPv6LL
Apr 17 02:41:43.693685 ignition[1006]: INFO : Ignition 2.22.0
Apr 17 02:41:43.693685 ignition[1006]: INFO : Stage: mount
Apr 17 02:41:43.697014 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 02:41:43.697014 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 02:41:43.702978 ignition[1006]: INFO : mount: mount passed
Apr 17 02:41:43.702978 ignition[1006]: INFO : Ignition finished successfully
Apr 17 02:41:43.712886 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 02:41:43.720148 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 02:41:43.815328 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 02:41:43.847975 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1018)
Apr 17 02:41:43.848023 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 02:41:43.848032 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 02:41:43.856550 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 02:41:43.856689 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 02:41:43.859152 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 02:41:43.979293 ignition[1035]: INFO : Ignition 2.22.0
Apr 17 02:41:43.979293 ignition[1035]: INFO : Stage: files
Apr 17 02:41:43.982482 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 02:41:43.982482 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 02:41:43.982482 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 02:41:43.982482 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 02:41:43.982482 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 02:41:43.994286 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 02:41:43.994286 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 02:41:43.994286 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 02:41:43.994286 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 02:41:43.994286 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 02:41:43.984239 unknown[1035]: wrote ssh authorized keys file for user: core
Apr 17 02:41:44.078441 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 02:41:44.439915 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 02:41:44.444858 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 17 02:41:44.444858 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 17 02:41:44.777402 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 17 02:41:45.396680 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 17 02:41:45.400712 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 02:41:45.400712 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 02:41:45.400712 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 02:41:45.400712 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 02:41:45.400712 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 02:41:45.400712 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 02:41:45.400712 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 02:41:45.432856 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 02:41:45.432856 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 02:41:45.432856 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 02:41:45.432856 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 02:41:45.432856 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 02:41:45.432856 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 02:41:45.432856 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 17 02:41:45.742917 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 17 02:41:47.572603 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 02:41:47.572603 ignition[1035]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 17 02:41:47.579599 ignition[1035]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 02:41:47.579599 ignition[1035]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 02:41:47.579599 ignition[1035]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 17 02:41:47.579599 ignition[1035]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 17 02:41:47.579599 ignition[1035]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 02:41:47.579599 ignition[1035]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 02:41:47.579599 ignition[1035]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 17 02:41:47.579599 ignition[1035]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 17 02:41:47.646946 ignition[1035]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 02:41:47.657156 ignition[1035]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 02:41:47.660453 ignition[1035]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 17 02:41:47.660453 ignition[1035]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 02:41:47.660453 ignition[1035]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 02:41:47.660453 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 02:41:47.660453 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 02:41:47.660453 ignition[1035]: INFO : files: files passed
Apr 17 02:41:47.660453 ignition[1035]: INFO : Ignition finished successfully
Apr 17 02:41:47.665006 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 02:41:47.676583 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 02:41:47.682528 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 02:41:47.699819 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 02:41:47.699965 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 02:41:47.711137 initrd-setup-root-after-ignition[1065]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 17 02:41:47.720101 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 02:41:47.723176 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 02:41:47.723176 initrd-setup-root-after-ignition[1068]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 02:41:47.787086 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 02:41:47.793937 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 02:41:47.795028 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 02:41:48.068573 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 02:41:48.068728 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 02:41:48.075727 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 02:41:48.082806 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 02:41:48.086440 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 02:41:48.088738 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 02:41:48.141999 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 02:41:48.166246 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 02:41:48.219525 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 02:41:48.225849 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 02:41:48.231011 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 02:41:48.285664 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 02:41:48.287332 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 02:41:48.294576 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 02:41:48.296605 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 02:41:48.303794 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 02:41:48.311834 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 02:41:48.312159 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 02:41:48.316874 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 02:41:48.324507 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 02:41:48.327147 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 02:41:48.332053 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 02:41:48.339236 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 02:41:48.339454 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 02:41:48.346248 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 02:41:48.346389 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 02:41:48.353613 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 02:41:48.355769 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 02:41:48.357589 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 02:41:48.357782 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 02:41:48.365074 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 02:41:48.365255 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 02:41:48.373052 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 02:41:48.373323 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 02:41:48.377849 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 02:41:48.381611 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 02:41:48.385834 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 02:41:48.391393 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 02:41:48.394550 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 02:41:48.397741 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 02:41:48.397927 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 02:41:48.400638 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 02:41:48.400758 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 02:41:48.401040 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 02:41:48.401232 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 02:41:48.406573 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 02:41:48.408642 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 02:41:48.416659 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 02:41:48.444730 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 02:41:48.449143 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 02:41:48.497904 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 02:41:48.502408 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 02:41:48.502637 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 02:41:48.513480 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 02:41:48.513663 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 02:41:48.528767 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 02:41:48.528890 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 02:41:48.561980 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 02:41:48.581236 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 02:41:48.581386 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 02:41:48.629003 ignition[1092]: INFO : Ignition 2.22.0
Apr 17 02:41:48.629003 ignition[1092]: INFO : Stage: umount
Apr 17 02:41:48.633364 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 02:41:48.633364 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 02:41:48.633364 ignition[1092]: INFO : umount: umount passed
Apr 17 02:41:48.633364 ignition[1092]: INFO : Ignition finished successfully
Apr 17 02:41:48.635362 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 02:41:48.635649 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 02:41:48.642311 systemd[1]: Stopped target network.target - Network.
Apr 17 02:41:48.644509 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 02:41:48.644582 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 02:41:48.648926 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 02:41:48.649013 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 02:41:48.650260 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 02:41:48.650310 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 02:41:48.657144 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 02:41:48.657258 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 02:41:48.660724 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 02:41:48.660815 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 02:41:48.662276 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 02:41:48.665472 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 02:41:48.682535 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 02:41:48.682729 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 02:41:48.688609 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 17 02:41:48.688791 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 02:41:48.688881 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 02:41:48.696408 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 17 02:41:48.697709 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 17 02:41:48.704656 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 02:41:48.705284 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 02:41:48.716580 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 02:41:48.718956 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 02:41:48.719127 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 02:41:48.728332 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 02:41:48.728607 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 02:41:48.787769 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 02:41:48.787890 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 02:41:48.793169 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 02:41:48.793350 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 02:41:48.803117 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 02:41:48.813354 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 17 02:41:48.813471 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 17 02:41:48.825049 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 02:41:48.825395 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 02:41:48.830573 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 02:41:48.832164 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 02:41:48.839289 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 02:41:48.839374 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 02:41:48.845990 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 02:41:48.846058 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 02:41:48.855172 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 02:41:48.855329 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 02:41:48.863137 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 02:41:48.863272 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 02:41:48.872177 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 02:41:48.872333 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 17 02:41:48.872387 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 02:41:48.885511 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 02:41:48.885607 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 02:41:48.892774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 02:41:48.892837 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 02:41:48.905284 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 17 02:41:48.905372 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 17 02:41:48.905412 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 17 02:41:48.905781 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 02:41:48.905880 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 02:41:48.924440 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 02:41:48.925091 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 02:41:48.930649 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 02:41:48.937620 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 02:41:48.962965 systemd[1]: Switching root.
Apr 17 02:41:49.011228 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Apr 17 02:41:49.011356 systemd-journald[201]: Journal stopped
Apr 17 02:41:51.123494 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 02:41:51.123548 kernel: SELinux: policy capability open_perms=1
Apr 17 02:41:51.123557 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 02:41:51.123564 kernel: SELinux: policy capability always_check_network=0
Apr 17 02:41:51.123576 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 02:41:51.123584 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 02:41:51.123593 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 02:41:51.123601 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 02:41:51.123609 kernel: SELinux: policy capability userspace_initial_context=0
Apr 17 02:41:51.123616 kernel: audit: type=1403 audit(1776393709.312:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 02:41:51.123627 systemd[1]: Successfully loaded SELinux policy in 94.110ms.
Apr 17 02:41:51.123644 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.626ms.
Apr 17 02:41:51.123653 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 02:41:51.123663 systemd[1]: Detected virtualization kvm.
Apr 17 02:41:51.123692 systemd[1]: Detected architecture x86-64.
Apr 17 02:41:51.123700 systemd[1]: Detected first boot.
Apr 17 02:41:51.123708 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 02:41:51.123717 zram_generator::config[1138]: No configuration found.
Apr 17 02:41:51.123726 kernel: Guest personality initialized and is inactive
Apr 17 02:41:51.123733 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 17 02:41:51.123764 kernel: Initialized host personality
Apr 17 02:41:51.123772 kernel: NET: Registered PF_VSOCK protocol family
Apr 17 02:41:51.123781 systemd[1]: Populated /etc with preset unit settings.
Apr 17 02:41:51.123790 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 17 02:41:51.123797 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 02:41:51.123805 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 02:41:51.123813 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 02:41:51.123821 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 02:41:51.123829 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 02:41:51.123837 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 02:41:51.123846 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 02:41:51.123853 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 02:41:51.123861 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 02:41:51.123869 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 02:41:51.123877 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 02:41:51.123884 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 02:41:51.123892 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 02:41:51.123900 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 02:41:51.123908 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 02:41:51.123916 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 02:41:51.123925 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 02:41:51.123949 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 02:41:51.123957 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 02:41:51.123966 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 02:41:51.123977 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 02:41:51.123984 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 02:41:51.123992 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 02:41:51.124001 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 02:41:51.124009 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 02:41:51.124017 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 02:41:51.124024 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 02:41:51.124032 systemd[1]: Reached target swap.target - Swaps.
Apr 17 02:41:51.124040 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 02:41:51.124048 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 02:41:51.124055 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 17 02:41:51.124063 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 02:41:51.124072 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 02:41:51.124079 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 02:41:51.124087 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 02:41:51.124095 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 02:41:51.124103 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 02:41:51.124110 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 02:41:51.124133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:41:51.124140 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 02:41:51.124148 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 02:41:51.124158 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 02:41:51.124166 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 02:41:51.124173 systemd[1]: Reached target machines.target - Containers.
Apr 17 02:41:51.124210 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 02:41:51.124218 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 02:41:51.124226 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 02:41:51.124234 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 02:41:51.124241 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 02:41:51.124250 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 02:41:51.124258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 02:41:51.124266 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 02:41:51.124274 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 02:41:51.124281 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 02:41:51.124289 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 02:41:51.124296 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 02:41:51.124304 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 02:41:51.124313 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 02:41:51.124321 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 02:41:51.124344 kernel: fuse: init (API version 7.41)
Apr 17 02:41:51.124352 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 02:41:51.124359 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 02:41:51.124366 kernel: ACPI: bus type drm_connector registered
Apr 17 02:41:51.124373 kernel: loop: module loaded
Apr 17 02:41:51.124380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 02:41:51.124388 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 02:41:51.124416 systemd-journald[1224]: Collecting audit messages is disabled.
Apr 17 02:41:51.124442 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 17 02:41:51.124455 systemd-journald[1224]: Journal started
Apr 17 02:41:51.124479 systemd-journald[1224]: Runtime Journal (/run/log/journal/edb58fd497c545c889807e41819ff1c4) is 6M, max 48.2M, 42.2M free.
Apr 17 02:41:50.294129 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 02:41:50.320007 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 17 02:41:50.322040 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 02:41:51.137254 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 02:41:51.143354 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 02:41:51.143514 systemd[1]: Stopped verity-setup.service.
Apr 17 02:41:51.149450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:41:51.162606 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 02:41:51.164751 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 02:41:51.166934 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 02:41:51.168933 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 02:41:51.171851 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 02:41:51.174221 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 02:41:51.191405 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 02:41:51.471837 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 02:41:51.475663 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 02:41:51.481088 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 02:41:51.481420 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 02:41:51.484418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 02:41:51.484710 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 02:41:51.488365 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 02:41:51.488723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 02:41:51.492409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 02:41:51.492739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 02:41:51.495692 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 02:41:51.495928 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 02:41:51.498467 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 02:41:51.498702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 02:41:51.503706 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 02:41:51.513517 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 02:41:51.526122 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 02:41:51.531753 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 17 02:41:51.547328 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 02:41:51.552760 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 02:41:51.556990 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 02:41:51.560432 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 02:41:51.560517 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 02:41:51.564463 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 17 02:41:51.572357 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 02:41:51.576048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 02:41:51.580523 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 02:41:51.586857 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 02:41:51.590330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 02:41:51.595129 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 02:41:51.598136 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 02:41:51.599448 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 02:41:51.605459 systemd-journald[1224]: Time spent on flushing to /var/log/journal/edb58fd497c545c889807e41819ff1c4 is 56.456ms for 986 entries.
Apr 17 02:41:51.605459 systemd-journald[1224]: System Journal (/var/log/journal/edb58fd497c545c889807e41819ff1c4) is 8M, max 195.6M, 187.6M free.
Apr 17 02:41:51.688093 systemd-journald[1224]: Received client request to flush runtime journal.
Apr 17 02:41:51.688164 kernel: loop0: detected capacity change from 0 to 128560
Apr 17 02:41:51.617574 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 02:41:51.625394 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 02:41:51.638252 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 02:41:51.643974 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 02:41:51.652549 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 02:41:51.692129 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 02:41:51.695892 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 02:41:51.704755 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 02:41:51.718358 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 17 02:41:51.793475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 02:41:51.799398 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 02:41:51.819859 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 02:41:51.823402 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 17 02:41:51.835290 kernel: loop1: detected capacity change from 0 to 110984
Apr 17 02:41:51.838549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 02:41:51.876269 kernel: loop2: detected capacity change from 0 to 217752
Apr 17 02:41:51.917983 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Apr 17 02:41:51.919008 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Apr 17 02:41:51.928236 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 02:41:51.943257 kernel: loop3: detected capacity change from 0 to 128560
Apr 17 02:41:52.105267 kernel: loop4: detected capacity change from 0 to 110984
Apr 17 02:41:52.125252 kernel: loop5: detected capacity change from 0 to 217752
Apr 17 02:41:52.146287 (sd-merge)[1282]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 17 02:41:52.146795 (sd-merge)[1282]: Merged extensions into '/usr'.
Apr 17 02:41:52.160942 systemd[1]: Reload requested from client PID 1258 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 02:41:52.160967 systemd[1]: Reloading...
Apr 17 02:41:52.347479 zram_generator::config[1304]: No configuration found.
Apr 17 02:41:52.731294 ldconfig[1253]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 02:41:52.948796 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 02:41:52.948887 systemd[1]: Reloading finished in 787 ms.
Apr 17 02:41:52.976432 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 02:41:52.978974 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 02:41:53.001271 systemd[1]: Starting ensure-sysext.service...
Apr 17 02:41:53.015567 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 02:41:53.036234 systemd[1]: Reload requested from client PID 1346 ('systemctl') (unit ensure-sysext.service)...
Apr 17 02:41:53.036263 systemd[1]: Reloading...
Apr 17 02:41:53.047797 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 17 02:41:53.047829 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 17 02:41:53.048057 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 02:41:53.048332 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 02:41:53.048848 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 02:41:53.049100 systemd-tmpfiles[1347]: ACLs are not supported, ignoring.
Apr 17 02:41:53.049135 systemd-tmpfiles[1347]: ACLs are not supported, ignoring.
Apr 17 02:41:53.052617 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 02:41:53.052646 systemd-tmpfiles[1347]: Skipping /boot
Apr 17 02:41:53.060329 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 02:41:53.060336 systemd-tmpfiles[1347]: Skipping /boot
Apr 17 02:41:53.122323 zram_generator::config[1377]: No configuration found.
Apr 17 02:41:53.529881 systemd[1]: Reloading finished in 493 ms.
Apr 17 02:41:53.602780 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 02:41:53.619990 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 02:41:53.643460 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 17 02:41:53.649402 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 02:41:53.662800 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 02:41:53.667827 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 02:41:53.672082 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 02:41:53.676728 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 02:41:53.690880 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 02:41:53.712642 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:41:53.713862 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 02:41:53.722359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 02:41:53.729036 systemd-udevd[1424]: Using default interface naming scheme 'v255'.
Apr 17 02:41:53.735817 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 02:41:53.744351 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 02:41:53.750271 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 02:41:53.750645 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 02:41:53.752401 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:41:53.760883 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 02:41:53.765407 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 02:41:53.766280 augenrules[1447]: No rules
Apr 17 02:41:53.769946 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 02:41:53.772970 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 17 02:41:53.779510 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 02:41:53.784513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 02:41:53.784669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 02:41:53.787810 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 02:41:53.791044 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 02:41:53.791293 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 02:41:53.795359 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 02:41:53.796823 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 02:41:53.819774 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 02:41:53.834602 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:41:53.834796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 02:41:53.840294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 02:41:53.846274 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 02:41:53.852624 systemd-resolved[1418]: Positive Trust Anchors:
Apr 17 02:41:53.852661 systemd-resolved[1418]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 02:41:53.852707 systemd-resolved[1418]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 02:41:53.856610 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 02:41:53.860316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 02:41:53.861571 systemd-resolved[1418]: Defaulting to hostname 'linux'.
Apr 17 02:41:53.862534 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 02:41:53.866513 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 02:41:53.876483 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 02:41:53.879478 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 02:41:53.879667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:41:53.880606 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 02:41:53.885368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 02:41:53.890096 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 02:41:53.894133 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 02:41:53.895035 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 02:41:53.898278 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 02:41:53.898491 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 02:41:53.916781 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 02:41:53.926939 systemd[1]: Finished ensure-sysext.service.
Apr 17 02:41:53.932316 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 02:41:53.945236 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 02:41:53.952772 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 02:41:53.956309 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 02:41:53.959827 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:41:53.961095 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 17 02:41:53.962238 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 17 02:41:53.964479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 02:41:53.965637 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 02:41:53.968827 kernel: ACPI: button: Power Button [PWRF]
Apr 17 02:41:53.969869 systemd-networkd[1490]: lo: Link UP
Apr 17 02:41:53.969892 systemd-networkd[1490]: lo: Gained carrier
Apr 17 02:41:53.970844 systemd-networkd[1490]: Enumeration completed
Apr 17 02:41:53.975280 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 02:41:53.975367 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 02:41:53.975370 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 02:41:53.976500 systemd-networkd[1490]: eth0: Link UP
Apr 17 02:41:53.976588 systemd-networkd[1490]: eth0: Gained carrier
Apr 17 02:41:53.976599 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 02:41:53.979248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 02:41:53.984794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 02:41:53.987911 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 02:41:53.991004 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 02:41:53.995291 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 02:41:53.996437 augenrules[1508]: /sbin/augenrules: No change
Apr 17 02:41:53.996787 systemd-networkd[1490]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 02:41:54.004677 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 02:41:54.007681 augenrules[1532]: No rules
Apr 17 02:41:54.010456 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 02:41:54.011314 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:41:54.019356 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 17 02:41:54.019557 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 17 02:41:54.026860 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 02:41:54.031888 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 02:41:54.032103 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 17 02:41:54.035836 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 02:41:54.036028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 02:41:54.039913 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 02:41:54.040153 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 02:41:54.044064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 02:41:54.044316 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 02:41:54.048872 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 02:41:54.049086 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 02:41:54.082045 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 02:41:54.094334 systemd[1]: Reached target network.target - Network.
Apr 17 02:41:54.099972 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 17 02:41:54.107380 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 02:41:54.110910 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 02:41:54.111173 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 02:41:54.116569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 02:41:54.158706 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 17 02:41:54.316545 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 02:41:54.986081 systemd-resolved[1418]: Clock change detected. Flushing caches.
Apr 17 02:41:54.991612 systemd-timesyncd[1534]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 17 02:41:54.995386 systemd-timesyncd[1534]: Initial clock synchronization to Fri 2026-04-17 02:41:54.985684 UTC.
Apr 17 02:41:55.119151 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 02:41:55.127969 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 02:41:55.131059 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 02:41:55.136423 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 02:41:55.140135 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 17 02:41:55.143019 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 02:41:55.146049 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 02:41:55.146123 systemd[1]: Reached target paths.target - Path Units.
Apr 17 02:41:55.148206 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 02:41:55.152646 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 02:41:55.158163 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 02:41:55.162986 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 02:41:55.168883 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 02:41:55.174105 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 02:41:55.182737 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 17 02:41:55.188840 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 17 02:41:55.193809 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 17 02:41:55.215788 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 02:41:55.219726 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 17 02:41:55.224224 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 02:41:55.227283 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 02:41:55.233171 systemd[1]: Reached target basic.target - Basic System.
Apr 17 02:41:55.236075 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 02:41:55.236133 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 02:41:55.237743 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 02:41:55.241683 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 02:41:55.244365 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 02:41:55.259878 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 02:41:55.264819 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 02:41:55.267109 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 02:41:55.267919 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 17 02:41:55.274148 jq[1573]: false
Apr 17 02:41:55.274253 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 02:41:55.280751 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 02:41:55.286824 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 02:41:55.289773 oslogin_cache_refresh[1575]: Refreshing passwd entry cache
Apr 17 02:41:55.290635 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Refreshing passwd entry cache
Apr 17 02:41:55.291860 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 02:41:55.295342 extend-filesystems[1574]: Found /dev/vda6
Apr 17 02:41:55.300742 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 02:41:55.304303 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 02:41:55.305540 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 02:41:55.306379 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 02:41:55.307112 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Failure getting users, quitting
Apr 17 02:41:55.307095 oslogin_cache_refresh[1575]: Failure getting users, quitting
Apr 17 02:41:55.307285 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 17 02:41:55.307285 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Refreshing group entry cache
Apr 17 02:41:55.307129 oslogin_cache_refresh[1575]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 17 02:41:55.307182 oslogin_cache_refresh[1575]: Refreshing group entry cache
Apr 17 02:41:55.313255 extend-filesystems[1574]: Found /dev/vda9
Apr 17 02:41:55.312128 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 02:41:55.314551 oslogin_cache_refresh[1575]: Failure getting groups, quitting
Apr 17 02:41:55.315861 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Failure getting groups, quitting
Apr 17 02:41:55.315861 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 17 02:41:55.314564 oslogin_cache_refresh[1575]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 17 02:41:55.319720 extend-filesystems[1574]: Checking size of /dev/vda9
Apr 17 02:41:55.325076 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 02:41:55.328764 jq[1591]: true
Apr 17 02:41:55.329993 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 02:41:55.330219 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 02:41:55.330434 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 17 02:41:55.330631 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 17 02:41:55.334670 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 02:41:55.334853 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 02:41:55.336758 extend-filesystems[1574]: Resized partition /dev/vda9
Apr 17 02:41:55.338567 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 02:41:55.338718 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 02:41:55.349004 extend-filesystems[1602]: resize2fs 1.47.3 (8-Jul-2025)
Apr 17 02:41:55.360066 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 17 02:41:55.360261 update_engine[1588]: I20260417 02:41:55.360071 1588 main.cc:92] Flatcar Update Engine starting
Apr 17 02:41:55.368509 (ntainerd)[1607]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 02:41:55.469723 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 17 02:41:55.473357 tar[1601]: linux-amd64/LICENSE
Apr 17 02:41:55.490145 tar[1601]: linux-amd64/helm
Apr 17 02:41:55.492330 jq[1605]: true
Apr 17 02:41:55.495671 systemd-logind[1585]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 17 02:41:55.495686 systemd-logind[1585]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 02:41:55.496030 systemd-logind[1585]: New seat seat0.
Apr 17 02:41:55.499075 extend-filesystems[1602]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 17 02:41:55.499075 extend-filesystems[1602]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 17 02:41:55.499075 extend-filesystems[1602]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 17 02:41:55.497898 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 02:41:55.511775 extend-filesystems[1574]: Resized filesystem in /dev/vda9
Apr 17 02:41:55.501521 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 02:41:55.518420 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 02:41:55.562701 dbus-daemon[1571]: [system] SELinux support is enabled
Apr 17 02:41:55.563074 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 02:41:55.570776 bash[1636]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 02:41:55.572591 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 02:41:55.580769 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 17 02:41:55.580882 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 02:41:55.581150 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 02:41:55.587923 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 02:41:55.589350 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 02:41:55.596145 dbus-daemon[1571]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 17 02:41:55.601916 update_engine[1588]: I20260417 02:41:55.601461 1588 update_check_scheduler.cc:74] Next update check in 2m35s
Apr 17 02:41:55.601846 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 02:41:55.609017 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 02:41:55.669751 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 02:41:55.695139 sshd_keygen[1595]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 02:41:55.710001 containerd[1607]: time="2026-04-17T02:41:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 17 02:41:55.710715 containerd[1607]: time="2026-04-17T02:41:55.710661228Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 17 02:41:55.725038 containerd[1607]: time="2026-04-17T02:41:55.724905929Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.917µs"
Apr 17 02:41:55.725038 containerd[1607]: time="2026-04-17T02:41:55.724976493Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 17 02:41:55.727014 containerd[1607]: time="2026-04-17T02:41:55.725358127Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 17 02:41:55.727014 containerd[1607]: time="2026-04-17T02:41:55.725575548Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 17 02:41:55.727014 containerd[1607]: time="2026-04-17T02:41:55.725589451Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 17 02:41:55.727014 containerd[1607]: time="2026-04-17T02:41:55.725608129Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 17 02:41:55.727014 containerd[1607]: time="2026-04-17T02:41:55.725643891Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 17 02:41:55.727014 containerd[1607]: time="2026-04-17T02:41:55.725652090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 17 02:41:55.727014 containerd[1607]: time="2026-04-17T02:41:55.725838490Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 17 02:41:55.727014 containerd[1607]: time="2026-04-17T02:41:55.725850043Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 17 02:41:55.727014 containerd[1607]: time="2026-04-17T02:41:55.725857799Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 17 02:41:55.727014 containerd[1607]: time="2026-04-17T02:41:55.725863460Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 17 02:41:55.727014 containerd[1607]: time="2026-04-17T02:41:55.725914411Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 17 02:41:55.727581 containerd[1607]: time="2026-04-17T02:41:55.727547294Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 17 02:41:55.727680 containerd[1607]: time="2026-04-17T02:41:55.727628870Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 17 02:41:55.727680 containerd[1607]: time="2026-04-17T02:41:55.727665968Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 17 02:41:55.727711 containerd[1607]: time="2026-04-17T02:41:55.727695435Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 17 02:41:55.728056 containerd[1607]: time="2026-04-17T02:41:55.728015309Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 17 02:41:55.728113 containerd[1607]: time="2026-04-17T02:41:55.728090346Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 02:41:55.734521 containerd[1607]: time="2026-04-17T02:41:55.734384138Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 17 02:41:55.734901 containerd[1607]: time="2026-04-17T02:41:55.734617905Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 17 02:41:55.734901 containerd[1607]: time="2026-04-17T02:41:55.734643690Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 17 02:41:55.734901 containerd[1607]: time="2026-04-17T02:41:55.734671522Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 17 02:41:55.734901 containerd[1607]: time="2026-04-17T02:41:55.734681185Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 17 02:41:55.734901 containerd[1607]: time="2026-04-17T02:41:55.734688677Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 17 02:41:55.734901 containerd[1607]: time="2026-04-17T02:41:55.734704360Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 17 02:41:55.734901 containerd[1607]: time="2026-04-17T02:41:55.734715218Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 17 02:41:55.734901 containerd[1607]: time="2026-04-17T02:41:55.734732404Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 17 02:41:55.734901 containerd[1607]: time="2026-04-17T02:41:55.734755895Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 17 02:41:55.734901 containerd[1607]: time="2026-04-17T02:41:55.734768461Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 17 02:41:55.734901 containerd[1607]: time="2026-04-17T02:41:55.734833443Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 17 02:41:55.735774 containerd[1607]: time="2026-04-17T02:41:55.735613960Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 17 02:41:55.735774 containerd[1607]: time="2026-04-17T02:41:55.735642397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 17 02:41:55.735774 containerd[1607]: time="2026-04-17T02:41:55.735660880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 17 02:41:55.735774 containerd[1607]: time="2026-04-17T02:41:55.735676846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 17 02:41:55.735774 containerd[1607]: time="2026-04-17T02:41:55.735685724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 17 02:41:55.735774 containerd[1607]: time="2026-04-17T02:41:55.735693986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 17 02:41:55.735774 containerd[1607]: time="2026-04-17T02:41:55.735702966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 17 02:41:55.735774 containerd[1607]: time="2026-04-17T02:41:55.735711034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 17 02:41:55.735774 containerd[1607]: time="2026-04-17T02:41:55.735752711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 17 02:41:55.735774 containerd[1607]: time="2026-04-17T02:41:55.735761261Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 17 02:41:55.735774 containerd[1607]: time="2026-04-17T02:41:55.735768089Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 17 02:41:55.735918 containerd[1607]: time="2026-04-17T02:41:55.735813062Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 17 02:41:55.735918 containerd[1607]: time="2026-04-17T02:41:55.735835716Z" level=info msg="Start snapshots syncer"
Apr 17 02:41:55.735918 containerd[1607]: time="2026-04-17T02:41:55.735883419Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 17 02:41:55.736298 containerd[1607]: time="2026-04-17T02:41:55.736228885Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 17 02:41:55.736505 containerd[1607]: time="2026-04-17T02:41:55.736319361Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 17 02:41:55.737515 containerd[1607]: time="2026-04-17T02:41:55.737488503Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 17 02:41:55.737853 containerd[1607]: time="2026-04-17T02:41:55.737784794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 17 02:41:55.737874 containerd[1607]: time="2026-04-17T02:41:55.737851160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 17 02:41:55.737874 containerd[1607]: time="2026-04-17T02:41:55.737861712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 17 02:41:55.737874 containerd[1607]: time="2026-04-17T02:41:55.737869871Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 17 02:41:55.737910 containerd[1607]: time="2026-04-17T02:41:55.737881177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 17 02:41:55.737910 containerd[1607]: time="2026-04-17T02:41:55.737889254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 17 02:41:55.737910 containerd[1607]: time="2026-04-17T02:41:55.737898220Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 17 02:41:55.737983 containerd[1607]: time="2026-04-17T02:41:55.737917785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 17 02:41:55.737983 containerd[1607]: time="2026-04-17T02:41:55.737973038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 17 02:41:55.738009 containerd[1607]: time="2026-04-17T02:41:55.737983844Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 17 02:41:55.738068 containerd[1607]: time="2026-04-17T02:41:55.738030245Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 02:41:55.738068 containerd[1607]: time="2026-04-17T02:41:55.738045094Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 02:41:55.738068 containerd[1607]: time="2026-04-17T02:41:55.738052278Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 02:41:55.738068 containerd[1607]: time="2026-04-17T02:41:55.738059872Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 02:41:55.738173 containerd[1607]: time="2026-04-17T02:41:55.738071927Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 17 02:41:55.738173 containerd[1607]: time="2026-04-17T02:41:55.738082649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 17 02:41:55.738173 containerd[1607]: time="2026-04-17T02:41:55.738096556Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 17 02:41:55.738173 containerd[1607]: time="2026-04-17T02:41:55.738122950Z" level=info msg="runtime interface created" Apr 17 02:41:55.738173 containerd[1607]: time="2026-04-17T02:41:55.738128069Z" level=info msg="created NRI interface" Apr 17 02:41:55.738173 containerd[1607]: time="2026-04-17T02:41:55.738134576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 17 02:41:55.738173 containerd[1607]: time="2026-04-17T02:41:55.738143932Z" level=info msg="Connect containerd service" Apr 17 02:41:55.738173 containerd[1607]: time="2026-04-17T02:41:55.738160007Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 02:41:55.738989 
containerd[1607]: time="2026-04-17T02:41:55.738906937Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 02:41:55.740495 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 02:41:55.748593 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 02:41:55.786653 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 02:41:55.786900 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 02:41:55.795325 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 02:41:55.833671 containerd[1607]: time="2026-04-17T02:41:55.833574333Z" level=info msg="Start subscribing containerd event" Apr 17 02:41:55.834003 containerd[1607]: time="2026-04-17T02:41:55.833805781Z" level=info msg="Start recovering state" Apr 17 02:41:55.834060 containerd[1607]: time="2026-04-17T02:41:55.834022304Z" level=info msg="Start event monitor" Apr 17 02:41:55.834164 containerd[1607]: time="2026-04-17T02:41:55.834073333Z" level=info msg="Start cni network conf syncer for default" Apr 17 02:41:55.834245 containerd[1607]: time="2026-04-17T02:41:55.834219758Z" level=info msg="Start streaming server" Apr 17 02:41:55.834506 containerd[1607]: time="2026-04-17T02:41:55.834248724Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 17 02:41:55.834506 containerd[1607]: time="2026-04-17T02:41:55.834493265Z" level=info msg="runtime interface starting up..." Apr 17 02:41:55.834506 containerd[1607]: time="2026-04-17T02:41:55.834500737Z" level=info msg="starting plugins..." 
Apr 17 02:41:55.834562 containerd[1607]: time="2026-04-17T02:41:55.834515328Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 17 02:41:55.834577 containerd[1607]: time="2026-04-17T02:41:55.834229080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 02:41:55.834637 containerd[1607]: time="2026-04-17T02:41:55.834612653Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 02:41:55.835262 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 02:41:55.835478 containerd[1607]: time="2026-04-17T02:41:55.835434247Z" level=info msg="containerd successfully booted in 0.125960s" Apr 17 02:41:55.841531 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 02:41:55.851844 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 02:41:55.859823 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 02:41:55.862384 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 02:41:55.999220 tar[1601]: linux-amd64/README.md Apr 17 02:41:56.031435 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 02:41:56.295356 systemd-networkd[1490]: eth0: Gained IPv6LL Apr 17 02:41:56.301542 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 02:41:56.306620 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 02:41:56.314670 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 17 02:41:56.320810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:41:56.329342 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 02:41:56.362992 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 17 02:41:56.363246 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Apr 17 02:41:56.366039 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 02:41:56.373382 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 02:41:57.313493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:41:57.317418 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 02:41:57.320739 systemd[1]: Startup finished in 4.150s (kernel) + 12.608s (initrd) + 7.433s (userspace) = 24.191s. Apr 17 02:41:57.336752 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 02:41:58.042007 kubelet[1706]: E0417 02:41:58.041731 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 02:41:58.049594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 02:41:58.049800 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 02:41:58.050716 systemd[1]: kubelet.service: Consumed 1.121s CPU time, 258.1M memory peak. Apr 17 02:42:05.148472 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 02:42:05.149724 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:52080.service - OpenSSH per-connection server daemon (10.0.0.1:52080). Apr 17 02:42:05.271177 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 52080 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:42:05.276290 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:42:05.284022 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Apr 17 02:42:05.284784 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 02:42:05.289787 systemd-logind[1585]: New session 1 of user core. Apr 17 02:42:05.312019 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 02:42:05.314321 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 02:42:05.332101 (systemd)[1725]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 02:42:05.334529 systemd-logind[1585]: New session c1 of user core. Apr 17 02:42:05.485386 systemd[1725]: Queued start job for default target default.target. Apr 17 02:42:05.503054 systemd[1725]: Created slice app.slice - User Application Slice. Apr 17 02:42:05.503097 systemd[1725]: Reached target paths.target - Paths. Apr 17 02:42:05.503149 systemd[1725]: Reached target timers.target - Timers. Apr 17 02:42:05.505295 systemd[1725]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 02:42:05.517615 systemd[1725]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 02:42:05.517754 systemd[1725]: Reached target sockets.target - Sockets. Apr 17 02:42:05.517808 systemd[1725]: Reached target basic.target - Basic System. Apr 17 02:42:05.517832 systemd[1725]: Reached target default.target - Main User Target. Apr 17 02:42:05.517851 systemd[1725]: Startup finished in 177ms. Apr 17 02:42:05.517983 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 02:42:05.520093 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 02:42:05.533295 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:52086.service - OpenSSH per-connection server daemon (10.0.0.1:52086). 
Apr 17 02:42:05.614813 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 52086 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:42:05.616092 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:42:05.620561 systemd-logind[1585]: New session 2 of user core. Apr 17 02:42:05.630291 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 02:42:05.646216 sshd[1739]: Connection closed by 10.0.0.1 port 52086 Apr 17 02:42:05.646548 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Apr 17 02:42:05.660006 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:52086.service: Deactivated successfully. Apr 17 02:42:05.661321 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 02:42:05.662008 systemd-logind[1585]: Session 2 logged out. Waiting for processes to exit. Apr 17 02:42:05.668433 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:52102.service - OpenSSH per-connection server daemon (10.0.0.1:52102). Apr 17 02:42:05.671350 systemd-logind[1585]: Removed session 2. Apr 17 02:42:05.740551 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 52102 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:42:05.741600 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:42:05.745805 systemd-logind[1585]: New session 3 of user core. Apr 17 02:42:05.755145 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 02:42:05.761432 sshd[1748]: Connection closed by 10.0.0.1 port 52102 Apr 17 02:42:05.761765 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Apr 17 02:42:05.782815 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:52102.service: Deactivated successfully. Apr 17 02:42:05.790599 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 02:42:05.791567 systemd-logind[1585]: Session 3 logged out. Waiting for processes to exit. 
Apr 17 02:42:05.795687 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:52104.service - OpenSSH per-connection server daemon (10.0.0.1:52104). Apr 17 02:42:05.797054 systemd-logind[1585]: Removed session 3. Apr 17 02:42:05.903370 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 52104 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:42:05.904691 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:42:05.909016 systemd-logind[1585]: New session 4 of user core. Apr 17 02:42:05.919133 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 02:42:05.937125 sshd[1757]: Connection closed by 10.0.0.1 port 52104 Apr 17 02:42:05.937877 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Apr 17 02:42:05.949419 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:52104.service: Deactivated successfully. Apr 17 02:42:05.950731 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 02:42:05.951552 systemd-logind[1585]: Session 4 logged out. Waiting for processes to exit. Apr 17 02:42:05.953828 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:52110.service - OpenSSH per-connection server daemon (10.0.0.1:52110). Apr 17 02:42:05.954643 systemd-logind[1585]: Removed session 4. Apr 17 02:42:06.029090 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 52110 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:42:06.030260 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:42:06.034643 systemd-logind[1585]: New session 5 of user core. Apr 17 02:42:06.043157 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 17 02:42:06.062554 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 02:42:06.062752 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 02:42:06.089494 sudo[1767]: pam_unix(sudo:session): session closed for user root Apr 17 02:42:06.091610 sshd[1766]: Connection closed by 10.0.0.1 port 52110 Apr 17 02:42:06.092426 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Apr 17 02:42:06.106340 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:52110.service: Deactivated successfully. Apr 17 02:42:06.107661 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 02:42:06.108427 systemd-logind[1585]: Session 5 logged out. Waiting for processes to exit. Apr 17 02:42:06.111573 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:52126.service - OpenSSH per-connection server daemon (10.0.0.1:52126). Apr 17 02:42:06.112808 systemd-logind[1585]: Removed session 5. Apr 17 02:42:06.213090 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 52126 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:42:06.215709 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:42:06.230472 systemd-logind[1585]: New session 6 of user core. Apr 17 02:42:06.239405 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 17 02:42:06.253493 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 02:42:06.253735 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 02:42:06.260719 sudo[1778]: pam_unix(sudo:session): session closed for user root Apr 17 02:42:06.266291 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 17 02:42:06.266592 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 02:42:06.292046 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 17 02:42:06.359752 augenrules[1800]: No rules Apr 17 02:42:06.360371 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 02:42:06.360609 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 17 02:42:06.361393 sudo[1777]: pam_unix(sudo:session): session closed for user root Apr 17 02:42:06.362670 sshd[1776]: Connection closed by 10.0.0.1 port 52126 Apr 17 02:42:06.364119 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Apr 17 02:42:06.380035 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:52126.service: Deactivated successfully. Apr 17 02:42:06.386203 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 02:42:06.389714 systemd-logind[1585]: Session 6 logged out. Waiting for processes to exit. Apr 17 02:42:06.390898 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:52128.service - OpenSSH per-connection server daemon (10.0.0.1:52128). Apr 17 02:42:06.392857 systemd-logind[1585]: Removed session 6. Apr 17 02:42:06.523330 sshd[1809]: Accepted publickey for core from 10.0.0.1 port 52128 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:42:06.526776 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:42:06.533415 systemd-logind[1585]: New session 7 of user core. 
Apr 17 02:42:06.544126 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 02:42:06.555692 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 02:42:06.555890 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 02:42:06.954693 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 02:42:06.976598 (dockerd)[1833]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 02:42:07.252459 dockerd[1833]: time="2026-04-17T02:42:07.251683266Z" level=info msg="Starting up" Apr 17 02:42:07.253901 dockerd[1833]: time="2026-04-17T02:42:07.253413237Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 17 02:42:07.282732 dockerd[1833]: time="2026-04-17T02:42:07.282266860Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 17 02:42:07.342585 dockerd[1833]: time="2026-04-17T02:42:07.341858033Z" level=info msg="Loading containers: start." Apr 17 02:42:07.355977 kernel: Initializing XFRM netlink socket Apr 17 02:42:07.837485 systemd-networkd[1490]: docker0: Link UP Apr 17 02:42:07.844769 dockerd[1833]: time="2026-04-17T02:42:07.844586373Z" level=info msg="Loading containers: done." 
Apr 17 02:42:07.863330 dockerd[1833]: time="2026-04-17T02:42:07.863212059Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 02:42:07.863330 dockerd[1833]: time="2026-04-17T02:42:07.863313233Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 17 02:42:07.863330 dockerd[1833]: time="2026-04-17T02:42:07.863380170Z" level=info msg="Initializing buildkit" Apr 17 02:42:07.921982 dockerd[1833]: time="2026-04-17T02:42:07.921761094Z" level=info msg="Completed buildkit initialization" Apr 17 02:42:07.930989 dockerd[1833]: time="2026-04-17T02:42:07.930778771Z" level=info msg="Daemon has completed initialization" Apr 17 02:42:07.930989 dockerd[1833]: time="2026-04-17T02:42:07.930988171Z" level=info msg="API listen on /run/docker.sock" Apr 17 02:42:07.931687 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 02:42:08.173139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 02:42:08.177674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:42:08.355583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 02:42:08.368183 (kubelet)[2061]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 02:42:08.490972 kubelet[2061]: E0417 02:42:08.490277 2061 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 02:42:08.494671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 02:42:08.494799 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 02:42:08.495204 systemd[1]: kubelet.service: Consumed 245ms CPU time, 110.3M memory peak. Apr 17 02:42:08.519862 containerd[1607]: time="2026-04-17T02:42:08.519746677Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 17 02:42:09.235800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount241511700.mount: Deactivated successfully. 
Apr 17 02:42:10.371311 containerd[1607]: time="2026-04-17T02:42:10.371090749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:10.375096 containerd[1607]: time="2026-04-17T02:42:10.373480679Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 17 02:42:10.376573 containerd[1607]: time="2026-04-17T02:42:10.376228861Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:10.385502 containerd[1607]: time="2026-04-17T02:42:10.385349662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:10.386441 containerd[1607]: time="2026-04-17T02:42:10.386385079Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.866525781s" Apr 17 02:42:10.386477 containerd[1607]: time="2026-04-17T02:42:10.386441995Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 17 02:42:10.387294 containerd[1607]: time="2026-04-17T02:42:10.387226551Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 17 02:42:11.545464 containerd[1607]: time="2026-04-17T02:42:11.545182768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:11.547114 containerd[1607]: time="2026-04-17T02:42:11.545858960Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 17 02:42:11.547168 containerd[1607]: time="2026-04-17T02:42:11.547139130Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:11.559116 containerd[1607]: time="2026-04-17T02:42:11.558873262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:11.564493 containerd[1607]: time="2026-04-17T02:42:11.564224080Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.176649027s" Apr 17 02:42:11.564493 containerd[1607]: time="2026-04-17T02:42:11.564438888Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 17 02:42:11.565265 containerd[1607]: time="2026-04-17T02:42:11.565245362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 17 02:42:12.531585 containerd[1607]: time="2026-04-17T02:42:12.531344468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:12.532293 containerd[1607]: time="2026-04-17T02:42:12.532065927Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 17 02:42:12.534705 containerd[1607]: time="2026-04-17T02:42:12.534501889Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:12.537868 containerd[1607]: time="2026-04-17T02:42:12.537736131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:12.539441 containerd[1607]: time="2026-04-17T02:42:12.539275346Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 973.977864ms" Apr 17 02:42:12.539441 containerd[1607]: time="2026-04-17T02:42:12.539388093Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 17 02:42:12.540698 containerd[1607]: time="2026-04-17T02:42:12.540650976Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 17 02:42:13.872478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1346656128.mount: Deactivated successfully. 
Apr 17 02:42:14.192834 containerd[1607]: time="2026-04-17T02:42:14.192208746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:14.195438 containerd[1607]: time="2026-04-17T02:42:14.194036818Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 17 02:42:14.195761 containerd[1607]: time="2026-04-17T02:42:14.195699982Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:14.203266 containerd[1607]: time="2026-04-17T02:42:14.202999338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:14.204082 containerd[1607]: time="2026-04-17T02:42:14.204021610Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.663325643s" Apr 17 02:42:14.204082 containerd[1607]: time="2026-04-17T02:42:14.204060692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 17 02:42:14.207427 containerd[1607]: time="2026-04-17T02:42:14.206917355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 17 02:42:14.729390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088100395.mount: Deactivated successfully. 
Apr 17 02:42:17.039381 containerd[1607]: time="2026-04-17T02:42:17.038810018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:17.042865 containerd[1607]: time="2026-04-17T02:42:17.040983814Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 17 02:42:17.042865 containerd[1607]: time="2026-04-17T02:42:17.042846772Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:17.053208 containerd[1607]: time="2026-04-17T02:42:17.053001413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:17.054223 containerd[1607]: time="2026-04-17T02:42:17.054179056Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.846783194s" Apr 17 02:42:17.054223 containerd[1607]: time="2026-04-17T02:42:17.054221118Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 17 02:42:17.055112 containerd[1607]: time="2026-04-17T02:42:17.055093976Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 17 02:42:17.801805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481155793.mount: Deactivated successfully. 
Apr 17 02:42:17.812057 containerd[1607]: time="2026-04-17T02:42:17.811805952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:17.813653 containerd[1607]: time="2026-04-17T02:42:17.813357638Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 17 02:42:17.815979 containerd[1607]: time="2026-04-17T02:42:17.815707657Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:17.817910 containerd[1607]: time="2026-04-17T02:42:17.817847156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:17.818355 containerd[1607]: time="2026-04-17T02:42:17.818315569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 762.985675ms" Apr 17 02:42:17.818430 containerd[1607]: time="2026-04-17T02:42:17.818364746Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 17 02:42:17.820241 containerd[1607]: time="2026-04-17T02:42:17.819823481Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 17 02:42:18.510314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221352923.mount: Deactivated successfully. Apr 17 02:42:18.511914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 17 02:42:18.515042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:42:18.929691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:42:18.945498 (kubelet)[2227]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 02:42:19.034088 kubelet[2227]: E0417 02:42:19.033875 2227 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 02:42:19.039857 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 02:42:19.040024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 02:42:19.040601 systemd[1]: kubelet.service: Consumed 360ms CPU time, 110.5M memory peak. 
Apr 17 02:42:20.166801 containerd[1607]: time="2026-04-17T02:42:20.166431110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:20.181123 containerd[1607]: time="2026-04-17T02:42:20.167241353Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 17 02:42:20.181123 containerd[1607]: time="2026-04-17T02:42:20.171297594Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:20.189007 containerd[1607]: time="2026-04-17T02:42:20.187490189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:20.191014 containerd[1607]: time="2026-04-17T02:42:20.190764908Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2.370426507s" Apr 17 02:42:20.191014 containerd[1607]: time="2026-04-17T02:42:20.190881992Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 17 02:42:22.223767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:42:22.223975 systemd[1]: kubelet.service: Consumed 360ms CPU time, 110.5M memory peak. Apr 17 02:42:22.226015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:42:22.267194 systemd[1]: Reload requested from client PID 2315 ('systemctl') (unit session-7.scope)... 
Apr 17 02:42:22.267232 systemd[1]: Reloading... Apr 17 02:42:22.377317 zram_generator::config[2355]: No configuration found. Apr 17 02:42:22.743347 systemd[1]: Reloading finished in 475 ms. Apr 17 02:42:22.832345 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 02:42:22.832425 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 02:42:22.832690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:42:22.832724 systemd[1]: kubelet.service: Consumed 124ms CPU time, 98.1M memory peak. Apr 17 02:42:22.834169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:42:23.086786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:42:23.161106 (kubelet)[2406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 02:42:23.236734 kubelet[2406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 02:42:23.514884 kubelet[2406]: I0417 02:42:23.514488 2406 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 17 02:42:23.516111 kubelet[2406]: I0417 02:42:23.515152 2406 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 02:42:23.516111 kubelet[2406]: I0417 02:42:23.515219 2406 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 02:42:23.516111 kubelet[2406]: I0417 02:42:23.515225 2406 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 02:42:23.516111 kubelet[2406]: I0417 02:42:23.515703 2406 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 17 02:42:23.565881 kubelet[2406]: E0417 02:42:23.565657 2406 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 02:42:23.567058 kubelet[2406]: I0417 02:42:23.566433 2406 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 02:42:23.570105 kubelet[2406]: I0417 02:42:23.570089 2406 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 17 02:42:23.575809 kubelet[2406]: I0417 02:42:23.575766 2406 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 02:42:23.576587 kubelet[2406]: I0417 02:42:23.576533 2406 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 02:42:23.576771 kubelet[2406]: I0417 02:42:23.576577 2406 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 02:42:23.576771 kubelet[2406]: I0417 02:42:23.576759 2406 topology_manager.go:143] "Creating topology manager with none policy"
Apr 17 02:42:23.576771 kubelet[2406]: I0417 02:42:23.576766 2406 container_manager_linux.go:308] "Creating device plugin manager"
Apr 17 02:42:23.577014 kubelet[2406]: I0417 02:42:23.576918 2406 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 02:42:23.603358 kubelet[2406]: I0417 02:42:23.603013 2406 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 17 02:42:23.606009 kubelet[2406]: I0417 02:42:23.603904 2406 kubelet.go:482] "Attempting to sync node with API server"
Apr 17 02:42:23.606009 kubelet[2406]: I0417 02:42:23.603979 2406 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 02:42:23.606009 kubelet[2406]: I0417 02:42:23.604017 2406 kubelet.go:394] "Adding apiserver pod source"
Apr 17 02:42:23.606009 kubelet[2406]: I0417 02:42:23.604027 2406 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 02:42:23.609845 kubelet[2406]: I0417 02:42:23.609781 2406 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 17 02:42:23.612300 kubelet[2406]: I0417 02:42:23.612247 2406 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 02:42:23.612300 kubelet[2406]: I0417 02:42:23.612291 2406 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 02:42:23.612370 kubelet[2406]: W0417 02:42:23.612337 2406 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 02:42:23.620669 kubelet[2406]: I0417 02:42:23.620567 2406 server.go:1257] "Started kubelet"
Apr 17 02:42:23.621997 kubelet[2406]: I0417 02:42:23.621010 2406 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 02:42:23.622045 kubelet[2406]: I0417 02:42:23.620920 2406 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 02:42:23.622061 kubelet[2406]: I0417 02:42:23.622049 2406 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 02:42:23.622189 kubelet[2406]: I0417 02:42:23.622175 2406 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 17 02:42:23.622401 kubelet[2406]: I0417 02:42:23.622348 2406 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 02:42:23.622401 kubelet[2406]: I0417 02:42:23.622354 2406 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 02:42:23.624262 kubelet[2406]: E0417 02:42:23.624216 2406 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 02:42:23.629416 kubelet[2406]: I0417 02:42:23.629313 2406 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 02:42:23.629916 kubelet[2406]: I0417 02:42:23.629725 2406 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 17 02:42:23.629916 kubelet[2406]: I0417 02:42:23.629894 2406 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 02:42:23.630689 kubelet[2406]: I0417 02:42:23.630071 2406 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 02:42:23.636406 kubelet[2406]: E0417 02:42:23.636252 2406 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms"
Apr 17 02:42:23.638098 kubelet[2406]: E0417 02:42:23.624235 2406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a704b2b5b999ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 02:42:23.620479487 +0000 UTC m=+0.455615409,LastTimestamp:2026-04-17 02:42:23.620479487 +0000 UTC m=+0.455615409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 02:42:23.639648 kubelet[2406]: I0417 02:42:23.639474 2406 factory.go:223] Registration of the systemd container factory successfully
Apr 17 02:42:23.639648 kubelet[2406]: E0417 02:42:23.639497 2406 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 02:42:23.639961 kubelet[2406]: I0417 02:42:23.639841 2406 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 02:42:23.641617 kubelet[2406]: I0417 02:42:23.641571 2406 factory.go:223] Registration of the containerd container factory successfully
Apr 17 02:42:23.652128 kubelet[2406]: I0417 02:42:23.652089 2406 cpu_manager.go:225] "Starting" policy="none"
Apr 17 02:42:23.652128 kubelet[2406]: I0417 02:42:23.652112 2406 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 17 02:42:23.652128 kubelet[2406]: I0417 02:42:23.652124 2406 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 17 02:42:23.657292 kubelet[2406]: I0417 02:42:23.657213 2406 policy_none.go:50] "Start"
Apr 17 02:42:23.657292 kubelet[2406]: I0417 02:42:23.657284 2406 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 02:42:23.657292 kubelet[2406]: I0417 02:42:23.657308 2406 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 02:42:23.660452 kubelet[2406]: I0417 02:42:23.660210 2406 policy_none.go:44] "Start"
Apr 17 02:42:23.670265 kubelet[2406]: I0417 02:42:23.670124 2406 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 02:42:23.679388 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 17 02:42:23.683062 kubelet[2406]: I0417 02:42:23.681674 2406 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 02:42:23.683062 kubelet[2406]: I0417 02:42:23.681696 2406 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 17 02:42:23.683062 kubelet[2406]: I0417 02:42:23.681728 2406 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 17 02:42:23.683062 kubelet[2406]: E0417 02:42:23.681779 2406 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 02:42:23.692658 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 17 02:42:23.712143 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 17 02:42:23.713837 kubelet[2406]: E0417 02:42:23.713795 2406 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 02:42:23.714110 kubelet[2406]: I0417 02:42:23.714059 2406 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 17 02:42:23.714110 kubelet[2406]: I0417 02:42:23.714091 2406 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 02:42:23.715062 kubelet[2406]: I0417 02:42:23.715025 2406 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 17 02:42:23.717101 kubelet[2406]: E0417 02:42:23.717066 2406 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 02:42:23.717152 kubelet[2406]: E0417 02:42:23.717144 2406 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 02:42:23.887079 kubelet[2406]: I0417 02:42:23.824511 2406 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 17 02:42:23.887079 kubelet[2406]: E0417 02:42:23.827045 2406 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 17 02:42:23.887079 kubelet[2406]: I0417 02:42:23.831124 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 02:42:23.887079 kubelet[2406]: I0417 02:42:23.831333 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 02:42:23.887079 kubelet[2406]: I0417 02:42:23.831349 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 02:42:23.887079 kubelet[2406]: I0417 02:42:23.831362 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 02:42:23.891555 kubelet[2406]: I0417 02:42:23.832484 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 17 02:42:23.891555 kubelet[2406]: I0417 02:42:23.832725 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 02:42:23.891555 kubelet[2406]: I0417 02:42:23.832741 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 02:42:23.891555 kubelet[2406]: I0417 02:42:23.832881 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 02:42:23.891555 kubelet[2406]: I0417 02:42:23.832895 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 02:42:23.894857 kubelet[2406]: E0417 02:42:23.841572 2406 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms"
Apr 17 02:42:23.988382 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice.
Apr 17 02:42:23.999076 kubelet[2406]: E0417 02:42:23.999010 2406 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 02:42:24.001737 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice.
Apr 17 02:42:24.002770 kubelet[2406]: E0417 02:42:24.002720 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:24.004053 containerd[1607]: time="2026-04-17T02:42:24.004021947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 17 02:42:24.010019 kubelet[2406]: E0417 02:42:24.009989 2406 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:42:24.012863 kubelet[2406]: E0417 02:42:24.012807 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:24.013613 containerd[1607]: time="2026-04-17T02:42:24.013452500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 17 02:42:24.013816 systemd[1]: Created slice kubepods-burstable-podd42913f7f16752bf7aea774413726d2e.slice - libcontainer container kubepods-burstable-podd42913f7f16752bf7aea774413726d2e.slice. 
Apr 17 02:42:24.016733 kubelet[2406]: E0417 02:42:24.016386 2406 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 02:42:24.020424 kubelet[2406]: E0417 02:42:24.020306 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:42:24.021215 containerd[1607]: time="2026-04-17T02:42:24.021078620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d42913f7f16752bf7aea774413726d2e,Namespace:kube-system,Attempt:0,}"
Apr 17 02:42:24.030100 kubelet[2406]: I0417 02:42:24.029905 2406 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 17 02:42:24.031357 kubelet[2406]: E0417 02:42:24.031320 2406 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 17 02:42:24.043330 kubelet[2406]: E0417 02:42:24.043225 2406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a704b2b5b999ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 02:42:23.620479487 +0000 UTC m=+0.455615409,LastTimestamp:2026-04-17 02:42:23.620479487 +0000 UTC m=+0.455615409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 02:42:24.243716 kubelet[2406]: E0417 02:42:24.243509 2406 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms"
Apr 17 02:42:24.434175 kubelet[2406]: I0417 02:42:24.433492 2406 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 17 02:42:24.435880 kubelet[2406]: E0417 02:42:24.435516 2406 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost"
Apr 17 02:42:24.479191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805754068.mount: Deactivated successfully.
Apr 17 02:42:24.485465 containerd[1607]: time="2026-04-17T02:42:24.485409115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 02:42:24.486773 containerd[1607]: time="2026-04-17T02:42:24.486569197Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070"
Apr 17 02:42:24.490919 containerd[1607]: time="2026-04-17T02:42:24.490692247Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 02:42:24.493258 containerd[1607]: time="2026-04-17T02:42:24.493201487Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 02:42:24.494987 containerd[1607]: time="2026-04-17T02:42:24.494791879Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Apr 17 02:42:24.500874 containerd[1607]: time="2026-04-17T02:42:24.500578035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 02:42:24.502642 containerd[1607]: time="2026-04-17T02:42:24.502371383Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 494.724817ms"
Apr 17 02:42:24.504164 containerd[1607]: time="2026-04-17T02:42:24.504128105Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Apr 17 02:42:24.504627 containerd[1607]: time="2026-04-17T02:42:24.504577095Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 02:42:24.508837 containerd[1607]: time="2026-04-17T02:42:24.508749474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 492.503697ms"
Apr 17 02:42:24.509418 containerd[1607]: time="2026-04-17T02:42:24.509379122Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 486.707969ms"
Apr 17 02:42:24.798154 containerd[1607]: time="2026-04-17T02:42:24.798025810Z" level=info msg="connecting to shim dff8ff66f25f2b81d0ee59cfb7c6dd3151b521e38a581068a7d258412d1fb62e" address="unix:///run/containerd/s/390a43576983b0132aed362d0d39f20af0da3340c701c5f0813b925e1a5d93bd" namespace=k8s.io protocol=ttrpc version=3
Apr 17 02:42:24.810076 containerd[1607]: time="2026-04-17T02:42:24.807885471Z" level=info msg="connecting to shim 92801252f9235da16f99a8dffcda49a73ae475ee16453051ab7fa5b926408a57" address="unix:///run/containerd/s/60fd6eb7b920138d818de0cb75342e07f054f4a9b0bbec40b6c54026f27697ee" namespace=k8s.io protocol=ttrpc version=3
Apr 17 02:42:24.812869 containerd[1607]: time="2026-04-17T02:42:24.812690235Z" level=info msg="connecting to shim 38e1479a75e0a5a927edef581feb6fb9cb541bed192aa2e6959d1bde3690ef3f" address="unix:///run/containerd/s/4e3fb648ffe535a4460ad5e880bd868aa80cc2ddadd826a87149c6be58ea6a88" namespace=k8s.io protocol=ttrpc version=3
Apr 17 02:42:24.859787 systemd[1]: Started cri-containerd-92801252f9235da16f99a8dffcda49a73ae475ee16453051ab7fa5b926408a57.scope - libcontainer container 92801252f9235da16f99a8dffcda49a73ae475ee16453051ab7fa5b926408a57.
Apr 17 02:42:24.867720 systemd[1]: Started cri-containerd-38e1479a75e0a5a927edef581feb6fb9cb541bed192aa2e6959d1bde3690ef3f.scope - libcontainer container 38e1479a75e0a5a927edef581feb6fb9cb541bed192aa2e6959d1bde3690ef3f.
Apr 17 02:42:24.952813 systemd[1]: Started cri-containerd-dff8ff66f25f2b81d0ee59cfb7c6dd3151b521e38a581068a7d258412d1fb62e.scope - libcontainer container dff8ff66f25f2b81d0ee59cfb7c6dd3151b521e38a581068a7d258412d1fb62e.
Apr 17 02:42:25.153378 kubelet[2406]: E0417 02:42:25.152423 2406 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Apr 17 02:42:25.221551 containerd[1607]: time="2026-04-17T02:42:25.221456490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"38e1479a75e0a5a927edef581feb6fb9cb541bed192aa2e6959d1bde3690ef3f\"" Apr 17 02:42:25.225543 kubelet[2406]: E0417 02:42:25.225297 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:25.238122 containerd[1607]: time="2026-04-17T02:42:25.237369291Z" level=info msg="CreateContainer within sandbox \"38e1479a75e0a5a927edef581feb6fb9cb541bed192aa2e6959d1bde3690ef3f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 02:42:25.242794 kubelet[2406]: I0417 02:42:25.242749 2406 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 02:42:25.243665 kubelet[2406]: E0417 02:42:25.243492 2406 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 17 02:42:25.259558 containerd[1607]: time="2026-04-17T02:42:25.259445880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d42913f7f16752bf7aea774413726d2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"92801252f9235da16f99a8dffcda49a73ae475ee16453051ab7fa5b926408a57\"" Apr 17 02:42:25.265403 kubelet[2406]: E0417 02:42:25.265309 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:25.266727 containerd[1607]: time="2026-04-17T02:42:25.266676619Z" level=info msg="Container a9381e989684f958dbbf935bfc093a8f16c1f7cd3df37fe78cc039e94c8edbb6: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:42:25.281300 containerd[1607]: time="2026-04-17T02:42:25.281236472Z" level=info msg="CreateContainer within sandbox \"92801252f9235da16f99a8dffcda49a73ae475ee16453051ab7fa5b926408a57\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 02:42:25.289697 containerd[1607]: time="2026-04-17T02:42:25.288456392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"dff8ff66f25f2b81d0ee59cfb7c6dd3151b521e38a581068a7d258412d1fb62e\"" Apr 17 02:42:25.289697 containerd[1607]: time="2026-04-17T02:42:25.288990708Z" level=info msg="CreateContainer within sandbox \"38e1479a75e0a5a927edef581feb6fb9cb541bed192aa2e6959d1bde3690ef3f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a9381e989684f958dbbf935bfc093a8f16c1f7cd3df37fe78cc039e94c8edbb6\"" Apr 17 02:42:25.289697 containerd[1607]: time="2026-04-17T02:42:25.289474661Z" level=info msg="StartContainer for \"a9381e989684f958dbbf935bfc093a8f16c1f7cd3df37fe78cc039e94c8edbb6\"" Apr 17 02:42:25.290843 kubelet[2406]: E0417 02:42:25.290625 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:25.291328 containerd[1607]: time="2026-04-17T02:42:25.291218980Z" level=info msg="connecting to shim a9381e989684f958dbbf935bfc093a8f16c1f7cd3df37fe78cc039e94c8edbb6" address="unix:///run/containerd/s/4e3fb648ffe535a4460ad5e880bd868aa80cc2ddadd826a87149c6be58ea6a88" protocol=ttrpc version=3 Apr 17 
02:42:25.297564 containerd[1607]: time="2026-04-17T02:42:25.297541205Z" level=info msg="CreateContainer within sandbox \"dff8ff66f25f2b81d0ee59cfb7c6dd3151b521e38a581068a7d258412d1fb62e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 02:42:25.301270 containerd[1607]: time="2026-04-17T02:42:25.301251920Z" level=info msg="Container 601681eb4035b019617cefcf4f5c418af08a3db7ab3784e1ea16b90515b66b8c: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:42:25.318038 containerd[1607]: time="2026-04-17T02:42:25.318011951Z" level=info msg="CreateContainer within sandbox \"92801252f9235da16f99a8dffcda49a73ae475ee16453051ab7fa5b926408a57\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"601681eb4035b019617cefcf4f5c418af08a3db7ab3784e1ea16b90515b66b8c\"" Apr 17 02:42:25.319114 containerd[1607]: time="2026-04-17T02:42:25.319065126Z" level=info msg="StartContainer for \"601681eb4035b019617cefcf4f5c418af08a3db7ab3784e1ea16b90515b66b8c\"" Apr 17 02:42:25.322793 containerd[1607]: time="2026-04-17T02:42:25.322511062Z" level=info msg="Container 4d4143450c5a764608cea5a8f807ec0e643a789d2167f5fd03eb4379e74f63d3: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:42:25.323520 systemd[1]: Started cri-containerd-a9381e989684f958dbbf935bfc093a8f16c1f7cd3df37fe78cc039e94c8edbb6.scope - libcontainer container a9381e989684f958dbbf935bfc093a8f16c1f7cd3df37fe78cc039e94c8edbb6. 
Apr 17 02:42:25.324639 containerd[1607]: time="2026-04-17T02:42:25.323479314Z" level=info msg="connecting to shim 601681eb4035b019617cefcf4f5c418af08a3db7ab3784e1ea16b90515b66b8c" address="unix:///run/containerd/s/60fd6eb7b920138d818de0cb75342e07f054f4a9b0bbec40b6c54026f27697ee" protocol=ttrpc version=3 Apr 17 02:42:25.340241 containerd[1607]: time="2026-04-17T02:42:25.340148689Z" level=info msg="CreateContainer within sandbox \"dff8ff66f25f2b81d0ee59cfb7c6dd3151b521e38a581068a7d258412d1fb62e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4d4143450c5a764608cea5a8f807ec0e643a789d2167f5fd03eb4379e74f63d3\"" Apr 17 02:42:25.341769 containerd[1607]: time="2026-04-17T02:42:25.341720926Z" level=info msg="StartContainer for \"4d4143450c5a764608cea5a8f807ec0e643a789d2167f5fd03eb4379e74f63d3\"" Apr 17 02:42:25.344019 containerd[1607]: time="2026-04-17T02:42:25.343896459Z" level=info msg="connecting to shim 4d4143450c5a764608cea5a8f807ec0e643a789d2167f5fd03eb4379e74f63d3" address="unix:///run/containerd/s/390a43576983b0132aed362d0d39f20af0da3340c701c5f0813b925e1a5d93bd" protocol=ttrpc version=3 Apr 17 02:42:25.353244 systemd[1]: Started cri-containerd-601681eb4035b019617cefcf4f5c418af08a3db7ab3784e1ea16b90515b66b8c.scope - libcontainer container 601681eb4035b019617cefcf4f5c418af08a3db7ab3784e1ea16b90515b66b8c. Apr 17 02:42:25.395489 systemd[1]: Started cri-containerd-4d4143450c5a764608cea5a8f807ec0e643a789d2167f5fd03eb4379e74f63d3.scope - libcontainer container 4d4143450c5a764608cea5a8f807ec0e643a789d2167f5fd03eb4379e74f63d3. 
Apr 17 02:42:25.520166 containerd[1607]: time="2026-04-17T02:42:25.520104189Z" level=info msg="StartContainer for \"a9381e989684f958dbbf935bfc093a8f16c1f7cd3df37fe78cc039e94c8edbb6\" returns successfully" Apr 17 02:42:25.526985 containerd[1607]: time="2026-04-17T02:42:25.526665455Z" level=info msg="StartContainer for \"601681eb4035b019617cefcf4f5c418af08a3db7ab3784e1ea16b90515b66b8c\" returns successfully" Apr 17 02:42:25.582263 containerd[1607]: time="2026-04-17T02:42:25.582065119Z" level=info msg="StartContainer for \"4d4143450c5a764608cea5a8f807ec0e643a789d2167f5fd03eb4379e74f63d3\" returns successfully" Apr 17 02:42:25.897004 kubelet[2406]: E0417 02:42:25.896468 2406 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:42:25.897004 kubelet[2406]: E0417 02:42:25.896564 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:25.961302 kubelet[2406]: E0417 02:42:25.949426 2406 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:42:25.961302 kubelet[2406]: E0417 02:42:25.949787 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:25.962690 kubelet[2406]: E0417 02:42:25.962456 2406 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:42:25.962690 kubelet[2406]: E0417 02:42:25.962671 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:26.857127 
kubelet[2406]: I0417 02:42:26.856989 2406 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 02:42:26.967042 kubelet[2406]: E0417 02:42:26.965982 2406 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:42:26.967042 kubelet[2406]: E0417 02:42:26.966210 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:26.967837 kubelet[2406]: E0417 02:42:26.967235 2406 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:42:26.967837 kubelet[2406]: E0417 02:42:26.967346 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:28.211221 kubelet[2406]: E0417 02:42:28.211077 2406 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 17 02:42:28.311879 kubelet[2406]: I0417 02:42:28.311377 2406 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 17 02:42:28.337123 kubelet[2406]: I0417 02:42:28.336455 2406 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 02:42:28.403091 kubelet[2406]: E0417 02:42:28.402367 2406 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 17 02:42:28.403091 kubelet[2406]: I0417 02:42:28.402591 2406 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:28.406419 
kubelet[2406]: E0417 02:42:28.406376 2406 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:28.406419 kubelet[2406]: I0417 02:42:28.406415 2406 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 02:42:28.414660 kubelet[2406]: E0417 02:42:28.414431 2406 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 17 02:42:28.886315 kubelet[2406]: I0417 02:42:28.886057 2406 apiserver.go:52] "Watching apiserver" Apr 17 02:42:28.931452 kubelet[2406]: I0417 02:42:28.931064 2406 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 02:42:29.596475 kubelet[2406]: I0417 02:42:29.596245 2406 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:29.610890 kubelet[2406]: E0417 02:42:29.610675 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:29.974993 kubelet[2406]: E0417 02:42:29.974715 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:30.039899 kubelet[2406]: I0417 02:42:30.039756 2406 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 02:42:30.048487 kubelet[2406]: E0417 02:42:30.048239 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 17 02:42:30.643087 systemd[1]: Reload requested from client PID 2701 ('systemctl') (unit session-7.scope)... Apr 17 02:42:30.643127 systemd[1]: Reloading... Apr 17 02:42:30.727069 zram_generator::config[2744]: No configuration found. Apr 17 02:42:30.961713 systemd[1]: Reloading finished in 318 ms. Apr 17 02:42:30.983966 kubelet[2406]: E0417 02:42:30.983701 2406 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:30.997741 kubelet[2406]: I0417 02:42:30.997606 2406 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 02:42:30.997835 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:42:31.023673 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 02:42:31.024071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:42:31.024120 systemd[1]: kubelet.service: Consumed 1.897s CPU time, 127M memory peak. Apr 17 02:42:31.026352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:42:31.257458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:42:31.265178 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 02:42:31.327923 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 02:42:31.339600 kubelet[2789]: I0417 02:42:31.339508 2789 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 17 02:42:31.341034 kubelet[2789]: I0417 02:42:31.340418 2789 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 02:42:31.341034 kubelet[2789]: I0417 02:42:31.340445 2789 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 02:42:31.341034 kubelet[2789]: I0417 02:42:31.340449 2789 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 02:42:31.341034 kubelet[2789]: I0417 02:42:31.340701 2789 server.go:951] "Client rotation is on, will bootstrap in background" Apr 17 02:42:31.342002 kubelet[2789]: I0417 02:42:31.341919 2789 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 02:42:31.344052 kubelet[2789]: I0417 02:42:31.344036 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 02:42:31.352417 kubelet[2789]: I0417 02:42:31.352313 2789 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 02:42:31.366545 kubelet[2789]: I0417 02:42:31.366350 2789 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 02:42:31.367532 kubelet[2789]: I0417 02:42:31.366819 2789 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 02:42:31.367686 kubelet[2789]: I0417 02:42:31.366845 2789 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 02:42:31.367686 kubelet[2789]: I0417 02:42:31.367679 2789 topology_manager.go:143] "Creating topology manager with none policy" Apr 17 02:42:31.367686 
kubelet[2789]: I0417 02:42:31.367689 2789 container_manager_linux.go:308] "Creating device plugin manager" Apr 17 02:42:31.367915 kubelet[2789]: I0417 02:42:31.367755 2789 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 02:42:31.368069 kubelet[2789]: I0417 02:42:31.368036 2789 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 17 02:42:31.368210 kubelet[2789]: I0417 02:42:31.368181 2789 kubelet.go:482] "Attempting to sync node with API server" Apr 17 02:42:31.368245 kubelet[2789]: I0417 02:42:31.368211 2789 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 02:42:31.368245 kubelet[2789]: I0417 02:42:31.368242 2789 kubelet.go:394] "Adding apiserver pod source" Apr 17 02:42:31.368298 kubelet[2789]: I0417 02:42:31.368249 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 02:42:31.370095 kubelet[2789]: I0417 02:42:31.370010 2789 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 17 02:42:31.372315 kubelet[2789]: I0417 02:42:31.372033 2789 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 02:42:31.372315 kubelet[2789]: I0417 02:42:31.372209 2789 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 02:42:31.390340 kubelet[2789]: I0417 02:42:31.390245 2789 server.go:1257] "Started kubelet" Apr 17 02:42:31.391424 kubelet[2789]: I0417 02:42:31.391217 2789 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 02:42:31.391479 kubelet[2789]: I0417 02:42:31.391446 2789 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 02:42:31.391832 kubelet[2789]: I0417 02:42:31.391815 2789 
server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 02:42:31.392038 kubelet[2789]: I0417 02:42:31.392003 2789 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 02:42:31.392808 kubelet[2789]: I0417 02:42:31.392748 2789 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 17 02:42:31.394011 kubelet[2789]: I0417 02:42:31.393356 2789 server.go:317] "Adding debug handlers to kubelet server" Apr 17 02:42:31.396214 kubelet[2789]: I0417 02:42:31.396115 2789 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 17 02:42:31.397844 kubelet[2789]: I0417 02:42:31.396692 2789 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 02:42:31.398103 kubelet[2789]: I0417 02:42:31.398000 2789 reconciler.go:29] "Reconciler: start to sync state" Apr 17 02:42:31.449195 kubelet[2789]: I0417 02:42:31.400421 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 02:42:31.455101 kubelet[2789]: I0417 02:42:31.454103 2789 factory.go:223] Registration of the systemd container factory successfully Apr 17 02:42:31.455101 kubelet[2789]: I0417 02:42:31.455087 2789 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 02:42:31.457042 kubelet[2789]: I0417 02:42:31.457030 2789 factory.go:223] Registration of the containerd container factory successfully Apr 17 02:42:31.474208 kubelet[2789]: I0417 02:42:31.474160 2789 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 02:42:31.475593 kubelet[2789]: I0417 02:42:31.475554 2789 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 02:42:31.475593 kubelet[2789]: I0417 02:42:31.475568 2789 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 17 02:42:31.475692 kubelet[2789]: I0417 02:42:31.475644 2789 kubelet.go:2501] "Starting kubelet main sync loop" Apr 17 02:42:31.475692 kubelet[2789]: E0417 02:42:31.475679 2789 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 02:42:31.511653 kubelet[2789]: I0417 02:42:31.510404 2789 cpu_manager.go:225] "Starting" policy="none" Apr 17 02:42:31.511653 kubelet[2789]: I0417 02:42:31.510588 2789 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 17 02:42:31.511653 kubelet[2789]: I0417 02:42:31.510808 2789 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 17 02:42:31.511653 kubelet[2789]: I0417 02:42:31.511187 2789 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 17 02:42:31.511653 kubelet[2789]: I0417 02:42:31.511352 2789 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 17 02:42:31.511653 kubelet[2789]: I0417 02:42:31.511369 2789 policy_none.go:50] "Start" Apr 17 02:42:31.511653 kubelet[2789]: I0417 02:42:31.511376 2789 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 02:42:31.511653 kubelet[2789]: I0417 02:42:31.511387 2789 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 02:42:31.511653 kubelet[2789]: I0417 02:42:31.511547 2789 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 02:42:31.511653 kubelet[2789]: I0417 02:42:31.511553 2789 policy_none.go:44] "Start" Apr 17 02:42:31.521316 kubelet[2789]: E0417 02:42:31.521185 2789 manager.go:525] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 02:42:31.521778 kubelet[2789]: I0417 02:42:31.521471 2789 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 17 02:42:31.521778 kubelet[2789]: I0417 02:42:31.521482 2789 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 02:42:31.521778 kubelet[2789]: I0417 02:42:31.521757 2789 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 17 02:42:31.523094 kubelet[2789]: E0417 02:42:31.523076 2789 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 02:42:31.579805 kubelet[2789]: I0417 02:42:31.579535 2789 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:31.579805 kubelet[2789]: I0417 02:42:31.579609 2789 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 02:42:31.579805 kubelet[2789]: I0417 02:42:31.579800 2789 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 02:42:31.603397 kubelet[2789]: E0417 02:42:31.603156 2789 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:31.609325 kubelet[2789]: E0417 02:42:31.608329 2789 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 02:42:31.634460 sudo[2830]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 17 02:42:31.634731 sudo[2830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 17 02:42:31.640980 kubelet[2789]: I0417 02:42:31.640744 2789 kubelet_node_status.go:74] 
"Attempting to register node" node="localhost" Apr 17 02:42:31.649665 kubelet[2789]: I0417 02:42:31.649535 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:31.649665 kubelet[2789]: I0417 02:42:31.649607 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:31.649665 kubelet[2789]: I0417 02:42:31.649666 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:31.649665 kubelet[2789]: I0417 02:42:31.649687 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 17 02:42:31.649665 kubelet[2789]: I0417 02:42:31.649704 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " 
pod="kube-system/kube-apiserver-localhost" Apr 17 02:42:31.650375 kubelet[2789]: I0417 02:42:31.649722 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " pod="kube-system/kube-apiserver-localhost" Apr 17 02:42:31.650375 kubelet[2789]: I0417 02:42:31.649740 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " pod="kube-system/kube-apiserver-localhost" Apr 17 02:42:31.650375 kubelet[2789]: I0417 02:42:31.649756 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:31.650375 kubelet[2789]: I0417 02:42:31.649774 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:31.652124 kubelet[2789]: I0417 02:42:31.652082 2789 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 17 02:42:31.652221 kubelet[2789]: I0417 02:42:31.652208 2789 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 17 02:42:31.904896 kubelet[2789]: E0417 02:42:31.904173 2789 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:31.904896 kubelet[2789]: E0417 02:42:31.904202 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:31.909420 kubelet[2789]: E0417 02:42:31.909385 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:32.095519 sudo[2830]: pam_unix(sudo:session): session closed for user root Apr 17 02:42:32.370139 kubelet[2789]: I0417 02:42:32.369511 2789 apiserver.go:52] "Watching apiserver" Apr 17 02:42:32.400158 kubelet[2789]: I0417 02:42:32.398907 2789 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 02:42:32.498801 kubelet[2789]: I0417 02:42:32.498310 2789 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 02:42:32.498801 kubelet[2789]: E0417 02:42:32.498503 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:32.498801 kubelet[2789]: I0417 02:42:32.498728 2789 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:32.511718 kubelet[2789]: E0417 02:42:32.511472 2789 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 02:42:32.511718 kubelet[2789]: E0417 02:42:32.511887 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:32.513290 kubelet[2789]: E0417 02:42:32.513146 2789 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 17 02:42:32.513431 kubelet[2789]: E0417 02:42:32.513329 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:32.549506 kubelet[2789]: I0417 02:42:32.549345 2789 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.549332808 podStartE2EDuration="2.549332808s" podCreationTimestamp="2026-04-17 02:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:42:32.538432991 +0000 UTC m=+1.267986839" watchObservedRunningTime="2026-04-17 02:42:32.549332808 +0000 UTC m=+1.278886662" Apr 17 02:42:32.556802 kubelet[2789]: I0417 02:42:32.556718 2789 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.556703648 podStartE2EDuration="3.556703648s" podCreationTimestamp="2026-04-17 02:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:42:32.550077787 +0000 UTC m=+1.279631637" watchObservedRunningTime="2026-04-17 02:42:32.556703648 +0000 UTC m=+1.286257513" Apr 17 02:42:32.557081 kubelet[2789]: I0417 02:42:32.556810 2789 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.556807694 podStartE2EDuration="1.556807694s" podCreationTimestamp="2026-04-17 02:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:42:32.556675992 +0000 UTC m=+1.286229851" watchObservedRunningTime="2026-04-17 02:42:32.556807694 +0000 UTC m=+1.286361554" Apr 17 02:42:33.507006 kubelet[2789]: E0417 02:42:33.506799 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:33.510507 kubelet[2789]: E0417 02:42:33.510295 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:33.511314 kubelet[2789]: E0417 02:42:33.511274 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:33.730653 sudo[1813]: pam_unix(sudo:session): session closed for user root Apr 17 02:42:33.732507 sshd[1812]: Connection closed by 10.0.0.1 port 52128 Apr 17 02:42:33.732922 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Apr 17 02:42:33.739212 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:52128.service: Deactivated successfully. Apr 17 02:42:33.742883 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 02:42:33.743151 systemd[1]: session-7.scope: Consumed 4.708s CPU time, 272.6M memory peak. Apr 17 02:42:33.745121 systemd-logind[1585]: Session 7 logged out. Waiting for processes to exit. Apr 17 02:42:33.746532 systemd-logind[1585]: Removed session 7. 
Apr 17 02:42:34.512030 kubelet[2789]: E0417 02:42:34.511791 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:34.512030 kubelet[2789]: E0417 02:42:34.511890 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:35.743237 kubelet[2789]: E0417 02:42:35.743022 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:37.403599 kubelet[2789]: I0417 02:42:37.403492 2789 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 02:42:37.405577 kubelet[2789]: I0417 02:42:37.405262 2789 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 02:42:37.405748 containerd[1607]: time="2026-04-17T02:42:37.404212618Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 17 02:42:38.622060 kubelet[2789]: I0417 02:42:38.621802 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-config-path\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.624273 kubelet[2789]: I0417 02:42:38.623059 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-host-proc-sys-net\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.624273 kubelet[2789]: I0417 02:42:38.623273 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41e24b01-b2ed-43dd-8d0c-d0068adc8948-xtables-lock\") pod \"kube-proxy-kflgm\" (UID: \"41e24b01-b2ed-43dd-8d0c-d0068adc8948\") " pod="kube-system/kube-proxy-kflgm" Apr 17 02:42:38.624273 kubelet[2789]: I0417 02:42:38.623291 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-run\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.624273 kubelet[2789]: I0417 02:42:38.623304 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-xtables-lock\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.624273 kubelet[2789]: I0417 02:42:38.623524 2789 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-hostproc\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.624273 kubelet[2789]: I0417 02:42:38.623769 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb101225-f8f7-43eb-ae23-bb96e3813c0a-clustermesh-secrets\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.624928 systemd[1]: Created slice kubepods-besteffort-pod41e24b01_b2ed_43dd_8d0c_d0068adc8948.slice - libcontainer container kubepods-besteffort-pod41e24b01_b2ed_43dd_8d0c_d0068adc8948.slice. Apr 17 02:42:38.629022 kubelet[2789]: I0417 02:42:38.623788 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-host-proc-sys-kernel\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.629022 kubelet[2789]: I0417 02:42:38.628305 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb101225-f8f7-43eb-ae23-bb96e3813c0a-hubble-tls\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.629022 kubelet[2789]: I0417 02:42:38.628336 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv69p\" (UniqueName: \"kubernetes.io/projected/bb101225-f8f7-43eb-ae23-bb96e3813c0a-kube-api-access-hv69p\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " 
pod="kube-system/cilium-wxjml" Apr 17 02:42:38.629022 kubelet[2789]: I0417 02:42:38.628443 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41e24b01-b2ed-43dd-8d0c-d0068adc8948-lib-modules\") pod \"kube-proxy-kflgm\" (UID: \"41e24b01-b2ed-43dd-8d0c-d0068adc8948\") " pod="kube-system/kube-proxy-kflgm" Apr 17 02:42:38.629022 kubelet[2789]: I0417 02:42:38.628465 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xphcd\" (UniqueName: \"kubernetes.io/projected/41e24b01-b2ed-43dd-8d0c-d0068adc8948-kube-api-access-xphcd\") pod \"kube-proxy-kflgm\" (UID: \"41e24b01-b2ed-43dd-8d0c-d0068adc8948\") " pod="kube-system/kube-proxy-kflgm" Apr 17 02:42:38.629880 kubelet[2789]: I0417 02:42:38.628482 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-bpf-maps\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.629880 kubelet[2789]: I0417 02:42:38.628499 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-cgroup\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.629880 kubelet[2789]: I0417 02:42:38.628519 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/41e24b01-b2ed-43dd-8d0c-d0068adc8948-kube-proxy\") pod \"kube-proxy-kflgm\" (UID: \"41e24b01-b2ed-43dd-8d0c-d0068adc8948\") " pod="kube-system/kube-proxy-kflgm" Apr 17 02:42:38.629880 kubelet[2789]: I0417 02:42:38.628534 2789 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cni-path\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.629880 kubelet[2789]: I0417 02:42:38.628551 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-etc-cni-netd\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.629880 kubelet[2789]: I0417 02:42:38.628567 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-lib-modules\") pod \"cilium-wxjml\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " pod="kube-system/cilium-wxjml" Apr 17 02:42:38.658015 systemd[1]: Created slice kubepods-burstable-podbb101225_f8f7_43eb_ae23_bb96e3813c0a.slice - libcontainer container kubepods-burstable-podbb101225_f8f7_43eb_ae23_bb96e3813c0a.slice. Apr 17 02:42:38.669473 systemd[1]: Created slice kubepods-besteffort-pod9fbd2673_c278_4c6f_97e7_6522b26103b6.slice - libcontainer container kubepods-besteffort-pod9fbd2673_c278_4c6f_97e7_6522b26103b6.slice. 
Apr 17 02:42:38.731499 kubelet[2789]: I0417 02:42:38.731322 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcqht\" (UniqueName: \"kubernetes.io/projected/9fbd2673-c278-4c6f-97e7-6522b26103b6-kube-api-access-hcqht\") pod \"cilium-operator-78cf5644cb-8jtkv\" (UID: \"9fbd2673-c278-4c6f-97e7-6522b26103b6\") " pod="kube-system/cilium-operator-78cf5644cb-8jtkv" Apr 17 02:42:38.741680 kubelet[2789]: I0417 02:42:38.735375 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fbd2673-c278-4c6f-97e7-6522b26103b6-cilium-config-path\") pod \"cilium-operator-78cf5644cb-8jtkv\" (UID: \"9fbd2673-c278-4c6f-97e7-6522b26103b6\") " pod="kube-system/cilium-operator-78cf5644cb-8jtkv" Apr 17 02:42:38.957394 kubelet[2789]: E0417 02:42:38.957051 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:38.959477 containerd[1607]: time="2026-04-17T02:42:38.959428202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kflgm,Uid:41e24b01-b2ed-43dd-8d0c-d0068adc8948,Namespace:kube-system,Attempt:0,}" Apr 17 02:42:38.968664 kubelet[2789]: E0417 02:42:38.968418 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:38.969422 containerd[1607]: time="2026-04-17T02:42:38.969201705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wxjml,Uid:bb101225-f8f7-43eb-ae23-bb96e3813c0a,Namespace:kube-system,Attempt:0,}" Apr 17 02:42:39.004884 kubelet[2789]: E0417 02:42:39.004611 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:39.005903 containerd[1607]: time="2026-04-17T02:42:39.005432403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-8jtkv,Uid:9fbd2673-c278-4c6f-97e7-6522b26103b6,Namespace:kube-system,Attempt:0,}" Apr 17 02:42:39.023365 containerd[1607]: time="2026-04-17T02:42:39.023196590Z" level=info msg="connecting to shim f0a6ee64afc8e62d65766f39b01e9b058ec5bc3eb7b53dbde8fc536ff33e56e9" address="unix:///run/containerd/s/0e85f2e0132e8d07fd5be2bab61237bdd2484e68a84200070e5c2a1e856109d9" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:42:39.031616 containerd[1607]: time="2026-04-17T02:42:39.031533798Z" level=info msg="connecting to shim e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa" address="unix:///run/containerd/s/a99f21516d305dbf68106fbb2844d57a617bb1aeba797487618b3a3455376bc3" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:42:39.050041 containerd[1607]: time="2026-04-17T02:42:39.048722204Z" level=info msg="connecting to shim 6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7" address="unix:///run/containerd/s/c2a479211c0cfa6ce649e3c9dd96bc4d8cf8df7e146ee0d9114d3e4eba3e0dba" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:42:39.068693 systemd[1]: Started cri-containerd-f0a6ee64afc8e62d65766f39b01e9b058ec5bc3eb7b53dbde8fc536ff33e56e9.scope - libcontainer container f0a6ee64afc8e62d65766f39b01e9b058ec5bc3eb7b53dbde8fc536ff33e56e9. Apr 17 02:42:39.094220 systemd[1]: Started cri-containerd-e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa.scope - libcontainer container e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa. Apr 17 02:42:39.104995 systemd[1]: Started cri-containerd-6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7.scope - libcontainer container 6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7. 
Apr 17 02:42:39.145451 containerd[1607]: time="2026-04-17T02:42:39.145225246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kflgm,Uid:41e24b01-b2ed-43dd-8d0c-d0068adc8948,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0a6ee64afc8e62d65766f39b01e9b058ec5bc3eb7b53dbde8fc536ff33e56e9\"" Apr 17 02:42:39.150123 kubelet[2789]: E0417 02:42:39.149622 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:39.151354 containerd[1607]: time="2026-04-17T02:42:39.151113599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wxjml,Uid:bb101225-f8f7-43eb-ae23-bb96e3813c0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\"" Apr 17 02:42:39.153327 kubelet[2789]: E0417 02:42:39.153110 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:39.164505 containerd[1607]: time="2026-04-17T02:42:39.163590680Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 17 02:42:39.164505 containerd[1607]: time="2026-04-17T02:42:39.164342446Z" level=info msg="CreateContainer within sandbox \"f0a6ee64afc8e62d65766f39b01e9b058ec5bc3eb7b53dbde8fc536ff33e56e9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 02:42:39.185288 containerd[1607]: time="2026-04-17T02:42:39.185194225Z" level=info msg="Container a725c05d3ec75d71cc0689127c48ad6e88828f0816ac802fdc6a76745ab6fa2f: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:42:39.198973 containerd[1607]: time="2026-04-17T02:42:39.198760974Z" level=info msg="CreateContainer within sandbox \"f0a6ee64afc8e62d65766f39b01e9b058ec5bc3eb7b53dbde8fc536ff33e56e9\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a725c05d3ec75d71cc0689127c48ad6e88828f0816ac802fdc6a76745ab6fa2f\"" Apr 17 02:42:39.204333 containerd[1607]: time="2026-04-17T02:42:39.204215244Z" level=info msg="StartContainer for \"a725c05d3ec75d71cc0689127c48ad6e88828f0816ac802fdc6a76745ab6fa2f\"" Apr 17 02:42:39.212031 containerd[1607]: time="2026-04-17T02:42:39.211389749Z" level=info msg="connecting to shim a725c05d3ec75d71cc0689127c48ad6e88828f0816ac802fdc6a76745ab6fa2f" address="unix:///run/containerd/s/0e85f2e0132e8d07fd5be2bab61237bdd2484e68a84200070e5c2a1e856109d9" protocol=ttrpc version=3 Apr 17 02:42:39.212031 containerd[1607]: time="2026-04-17T02:42:39.211602746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-8jtkv,Uid:9fbd2673-c278-4c6f-97e7-6522b26103b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\"" Apr 17 02:42:39.213088 kubelet[2789]: E0417 02:42:39.212856 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:39.247152 systemd[1]: Started cri-containerd-a725c05d3ec75d71cc0689127c48ad6e88828f0816ac802fdc6a76745ab6fa2f.scope - libcontainer container a725c05d3ec75d71cc0689127c48ad6e88828f0816ac802fdc6a76745ab6fa2f. 
Apr 17 02:42:39.338024 containerd[1607]: time="2026-04-17T02:42:39.337521110Z" level=info msg="StartContainer for \"a725c05d3ec75d71cc0689127c48ad6e88828f0816ac802fdc6a76745ab6fa2f\" returns successfully" Apr 17 02:42:39.457546 kubelet[2789]: E0417 02:42:39.457333 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:39.538291 kubelet[2789]: E0417 02:42:39.536837 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:39.562235 kubelet[2789]: I0417 02:42:39.561722 2789 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-kflgm" podStartSLOduration=1.561705481 podStartE2EDuration="1.561705481s" podCreationTimestamp="2026-04-17 02:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:42:39.561493022 +0000 UTC m=+8.291046876" watchObservedRunningTime="2026-04-17 02:42:39.561705481 +0000 UTC m=+8.291259341" Apr 17 02:42:41.134506 update_engine[1588]: I20260417 02:42:41.132086 1588 update_attempter.cc:509] Updating boot flags... 
Apr 17 02:42:42.625131 kubelet[2789]: E0417 02:42:42.622497 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:43.588537 kubelet[2789]: E0417 02:42:43.588213 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:45.749852 kubelet[2789]: E0417 02:42:45.749590 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:49.580407 kubelet[2789]: E0417 02:42:49.577253 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:54.210280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947813318.mount: Deactivated successfully. 
Apr 17 02:42:58.688133 containerd[1607]: time="2026-04-17T02:42:58.687135653Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:58.694636 containerd[1607]: time="2026-04-17T02:42:58.692053964Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 17 02:42:58.696095 containerd[1607]: time="2026-04-17T02:42:58.696019750Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:42:58.698087 containerd[1607]: time="2026-04-17T02:42:58.697488684Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 19.533690591s" Apr 17 02:42:58.698087 containerd[1607]: time="2026-04-17T02:42:58.697535619Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 17 02:42:58.751046 containerd[1607]: time="2026-04-17T02:42:58.750630696Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 17 02:42:58.766757 containerd[1607]: time="2026-04-17T02:42:58.766459296Z" level=info msg="CreateContainer within sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 17 02:42:58.809221 containerd[1607]: time="2026-04-17T02:42:58.809050651Z" level=info msg="Container 9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:42:58.832213 containerd[1607]: time="2026-04-17T02:42:58.831850809Z" level=info msg="CreateContainer within sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe\"" Apr 17 02:42:58.833756 containerd[1607]: time="2026-04-17T02:42:58.833659155Z" level=info msg="StartContainer for \"9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe\"" Apr 17 02:42:58.837991 containerd[1607]: time="2026-04-17T02:42:58.837101791Z" level=info msg="connecting to shim 9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe" address="unix:///run/containerd/s/a99f21516d305dbf68106fbb2844d57a617bb1aeba797487618b3a3455376bc3" protocol=ttrpc version=3 Apr 17 02:42:58.924225 systemd[1]: Started cri-containerd-9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe.scope - libcontainer container 9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe. Apr 17 02:42:59.005250 containerd[1607]: time="2026-04-17T02:42:59.003101438Z" level=info msg="StartContainer for \"9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe\" returns successfully" Apr 17 02:42:59.025018 systemd[1]: cri-containerd-9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe.scope: Deactivated successfully. Apr 17 02:42:59.025369 systemd[1]: cri-containerd-9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe.scope: Consumed 52ms CPU time, 7M memory peak, 4K read from disk, 2.8M written to disk. 
Apr 17 02:42:59.030042 containerd[1607]: time="2026-04-17T02:42:59.029902494Z" level=info msg="received container exit event container_id:\"9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe\" id:\"9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe\" pid:3241 exited_at:{seconds:1776393779 nanos:28381592}" Apr 17 02:42:59.229230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe-rootfs.mount: Deactivated successfully. Apr 17 02:42:59.602143 kubelet[2789]: E0417 02:42:59.601231 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:42:59.628161 containerd[1607]: time="2026-04-17T02:42:59.627698488Z" level=info msg="CreateContainer within sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 17 02:42:59.659270 containerd[1607]: time="2026-04-17T02:42:59.659098203Z" level=info msg="Container 349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:42:59.685433 containerd[1607]: time="2026-04-17T02:42:59.685216465Z" level=info msg="CreateContainer within sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e\"" Apr 17 02:42:59.689889 containerd[1607]: time="2026-04-17T02:42:59.688815673Z" level=info msg="StartContainer for \"349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e\"" Apr 17 02:42:59.694387 containerd[1607]: time="2026-04-17T02:42:59.693824276Z" level=info msg="connecting to shim 349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e" 
address="unix:///run/containerd/s/a99f21516d305dbf68106fbb2844d57a617bb1aeba797487618b3a3455376bc3" protocol=ttrpc version=3 Apr 17 02:42:59.767239 systemd[1]: Started cri-containerd-349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e.scope - libcontainer container 349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e. Apr 17 02:42:59.827230 containerd[1607]: time="2026-04-17T02:42:59.827086759Z" level=info msg="StartContainer for \"349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e\" returns successfully" Apr 17 02:42:59.845338 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 02:42:59.845666 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 02:42:59.847477 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 17 02:42:59.849445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 02:42:59.851271 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 17 02:42:59.851780 containerd[1607]: time="2026-04-17T02:42:59.851638265Z" level=info msg="received container exit event container_id:\"349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e\" id:\"349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e\" pid:3283 exited_at:{seconds:1776393779 nanos:851359769}" Apr 17 02:42:59.854241 systemd[1]: cri-containerd-349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e.scope: Deactivated successfully. Apr 17 02:42:59.918661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e-rootfs.mount: Deactivated successfully. Apr 17 02:42:59.944346 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 17 02:43:00.608023 kubelet[2789]: E0417 02:43:00.607827 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:43:00.620502 containerd[1607]: time="2026-04-17T02:43:00.620312615Z" level=info msg="CreateContainer within sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 17 02:43:00.693035 containerd[1607]: time="2026-04-17T02:43:00.692649603Z" level=info msg="Container 3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:43:00.715914 containerd[1607]: time="2026-04-17T02:43:00.715760703Z" level=info msg="CreateContainer within sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5\"" Apr 17 02:43:00.720396 containerd[1607]: time="2026-04-17T02:43:00.719688983Z" level=info msg="StartContainer for \"3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5\"" Apr 17 02:43:00.725275 containerd[1607]: time="2026-04-17T02:43:00.725229875Z" level=info msg="connecting to shim 3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5" address="unix:///run/containerd/s/a99f21516d305dbf68106fbb2844d57a617bb1aeba797487618b3a3455376bc3" protocol=ttrpc version=3 Apr 17 02:43:00.794626 systemd[1]: Started cri-containerd-3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5.scope - libcontainer container 3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5. Apr 17 02:43:00.800330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1597740194.mount: Deactivated successfully. 
Apr 17 02:43:00.972444 systemd[1]: cri-containerd-3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5.scope: Deactivated successfully.
Apr 17 02:43:00.973918 containerd[1607]: time="2026-04-17T02:43:00.973328325Z" level=info msg="received container exit event container_id:\"3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5\" id:\"3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5\" pid:3345 exited_at:{seconds:1776393780 nanos:973110184}"
Apr 17 02:43:00.982988 containerd[1607]: time="2026-04-17T02:43:00.982750617Z" level=info msg="StartContainer for \"3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5\" returns successfully"
Apr 17 02:43:01.067352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5-rootfs.mount: Deactivated successfully.
Apr 17 02:43:01.555000 containerd[1607]: time="2026-04-17T02:43:01.554576618Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 02:43:01.557027 containerd[1607]: time="2026-04-17T02:43:01.556743778Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 17 02:43:01.566047 containerd[1607]: time="2026-04-17T02:43:01.565657257Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 02:43:01.572276 containerd[1607]: time="2026-04-17T02:43:01.571905873Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.821138398s"
Apr 17 02:43:01.572276 containerd[1607]: time="2026-04-17T02:43:01.572205455Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 17 02:43:01.614080 containerd[1607]: time="2026-04-17T02:43:01.613783906Z" level=info msg="CreateContainer within sandbox \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 17 02:43:01.621385 kubelet[2789]: E0417 02:43:01.621097 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:01.627431 containerd[1607]: time="2026-04-17T02:43:01.627379613Z" level=info msg="CreateContainer within sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 17 02:43:01.636250 containerd[1607]: time="2026-04-17T02:43:01.636144240Z" level=info msg="Container 7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7: CDI devices from CRI Config.CDIDevices: []"
Apr 17 02:43:01.643553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3397069986.mount: Deactivated successfully.
Apr 17 02:43:01.658455 containerd[1607]: time="2026-04-17T02:43:01.658259752Z" level=info msg="CreateContainer within sandbox \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7\""
Apr 17 02:43:01.660870 containerd[1607]: time="2026-04-17T02:43:01.660504391Z" level=info msg="StartContainer for \"7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7\""
Apr 17 02:43:01.673030 containerd[1607]: time="2026-04-17T02:43:01.672684683Z" level=info msg="Container de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67: CDI devices from CRI Config.CDIDevices: []"
Apr 17 02:43:01.689857 containerd[1607]: time="2026-04-17T02:43:01.689496370Z" level=info msg="connecting to shim 7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7" address="unix:///run/containerd/s/c2a479211c0cfa6ce649e3c9dd96bc4d8cf8df7e146ee0d9114d3e4eba3e0dba" protocol=ttrpc version=3
Apr 17 02:43:01.772595 containerd[1607]: time="2026-04-17T02:43:01.772357769Z" level=info msg="CreateContainer within sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67\""
Apr 17 02:43:01.776226 containerd[1607]: time="2026-04-17T02:43:01.774834433Z" level=info msg="StartContainer for \"de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67\""
Apr 17 02:43:01.785751 containerd[1607]: time="2026-04-17T02:43:01.785618505Z" level=info msg="connecting to shim de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67" address="unix:///run/containerd/s/a99f21516d305dbf68106fbb2844d57a617bb1aeba797487618b3a3455376bc3" protocol=ttrpc version=3
Apr 17 02:43:01.805128 systemd[1]: Started cri-containerd-7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7.scope - libcontainer container 7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7.
Apr 17 02:43:01.836590 systemd[1]: Started cri-containerd-de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67.scope - libcontainer container de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67.
Apr 17 02:43:01.932665 containerd[1607]: time="2026-04-17T02:43:01.932224833Z" level=info msg="StartContainer for \"7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7\" returns successfully"
Apr 17 02:43:01.955707 systemd[1]: cri-containerd-de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67.scope: Deactivated successfully.
Apr 17 02:43:01.961469 containerd[1607]: time="2026-04-17T02:43:01.961333996Z" level=info msg="received container exit event container_id:\"de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67\" id:\"de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67\" pid:3406 exited_at:{seconds:1776393781 nanos:960813588}"
Apr 17 02:43:01.985501 containerd[1607]: time="2026-04-17T02:43:01.985263318Z" level=info msg="StartContainer for \"de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67\" returns successfully"
Apr 17 02:43:02.189691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67-rootfs.mount: Deactivated successfully.
Apr 17 02:43:02.669528 kubelet[2789]: E0417 02:43:02.668851 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:02.709440 kubelet[2789]: E0417 02:43:02.708890 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:02.760079 containerd[1607]: time="2026-04-17T02:43:02.759687347Z" level=info msg="CreateContainer within sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 02:43:02.870887 containerd[1607]: time="2026-04-17T02:43:02.870779974Z" level=info msg="Container 6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9: CDI devices from CRI Config.CDIDevices: []"
Apr 17 02:43:02.884673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040994849.mount: Deactivated successfully.
Apr 17 02:43:02.893006 kubelet[2789]: I0417 02:43:02.892799 2789 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-8jtkv" podStartSLOduration=2.532077344 podStartE2EDuration="24.892786479s" podCreationTimestamp="2026-04-17 02:42:38 +0000 UTC" firstStartedPulling="2026-04-17 02:42:39.214876911 +0000 UTC m=+7.944430772" lastFinishedPulling="2026-04-17 02:43:01.575586056 +0000 UTC m=+30.305139907" observedRunningTime="2026-04-17 02:43:02.892442797 +0000 UTC m=+31.621996647" watchObservedRunningTime="2026-04-17 02:43:02.892786479 +0000 UTC m=+31.622340340"
Apr 17 02:43:02.989354 containerd[1607]: time="2026-04-17T02:43:02.988039135Z" level=info msg="CreateContainer within sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\""
Apr 17 02:43:03.008568 containerd[1607]: time="2026-04-17T02:43:03.007657726Z" level=info msg="StartContainer for \"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\""
Apr 17 02:43:03.032472 containerd[1607]: time="2026-04-17T02:43:03.031865900Z" level=info msg="connecting to shim 6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9" address="unix:///run/containerd/s/a99f21516d305dbf68106fbb2844d57a617bb1aeba797487618b3a3455376bc3" protocol=ttrpc version=3
Apr 17 02:43:03.144383 systemd[1]: Started cri-containerd-6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9.scope - libcontainer container 6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9.
Apr 17 02:43:03.620626 containerd[1607]: time="2026-04-17T02:43:03.620409225Z" level=info msg="StartContainer for \"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\" returns successfully"
Apr 17 02:43:03.755145 kubelet[2789]: E0417 02:43:03.755108 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:04.084053 kubelet[2789]: I0417 02:43:04.083510 2789 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Apr 17 02:43:04.372550 kubelet[2789]: E0417 02:43:04.371587 2789 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-gt9t2\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" pod="kube-system/coredns-7d764666f9-gt9t2"
Apr 17 02:43:04.388462 systemd[1]: Created slice kubepods-burstable-podc80fcc5d_6ee3_49bd_829a_1182185f052a.slice - libcontainer container kubepods-burstable-podc80fcc5d_6ee3_49bd_829a_1182185f052a.slice.
Apr 17 02:43:04.403529 kubelet[2789]: E0417 02:43:04.403413 2789 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-gt9t2\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" pod="kube-system/coredns-7d764666f9-gt9t2"
Apr 17 02:43:04.404454 kubelet[2789]: E0417 02:43:04.404233 2789 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
Apr 17 02:43:04.417208 systemd[1]: Created slice kubepods-burstable-podf6941710_2162_4edf_9e32_19e4628f8551.slice - libcontainer container kubepods-burstable-podf6941710_2162_4edf_9e32_19e4628f8551.slice.
Apr 17 02:43:04.467808 kubelet[2789]: I0417 02:43:04.467445 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6941710-2162-4edf-9e32-19e4628f8551-config-volume\") pod \"coredns-7d764666f9-bxqjr\" (UID: \"f6941710-2162-4edf-9e32-19e4628f8551\") " pod="kube-system/coredns-7d764666f9-bxqjr"
Apr 17 02:43:04.467808 kubelet[2789]: I0417 02:43:04.467650 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b22c\" (UniqueName: \"kubernetes.io/projected/f6941710-2162-4edf-9e32-19e4628f8551-kube-api-access-2b22c\") pod \"coredns-7d764666f9-bxqjr\" (UID: \"f6941710-2162-4edf-9e32-19e4628f8551\") " pod="kube-system/coredns-7d764666f9-bxqjr"
Apr 17 02:43:04.467808 kubelet[2789]: I0417 02:43:04.467676 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c80fcc5d-6ee3-49bd-829a-1182185f052a-config-volume\") pod \"coredns-7d764666f9-gt9t2\" (UID: \"c80fcc5d-6ee3-49bd-829a-1182185f052a\") " pod="kube-system/coredns-7d764666f9-gt9t2"
Apr 17 02:43:04.467808 kubelet[2789]: I0417 02:43:04.467695 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g9mv\" (UniqueName: \"kubernetes.io/projected/c80fcc5d-6ee3-49bd-829a-1182185f052a-kube-api-access-7g9mv\") pod \"coredns-7d764666f9-gt9t2\" (UID: \"c80fcc5d-6ee3-49bd-829a-1182185f052a\") " pod="kube-system/coredns-7d764666f9-gt9t2"
Apr 17 02:43:04.757683 kubelet[2789]: E0417 02:43:04.757514 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:05.324446 kubelet[2789]: I0417 02:43:05.324134 2789 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-wxjml" podStartSLOduration=3.76936926 podStartE2EDuration="27.324115282s" podCreationTimestamp="2026-04-17 02:42:38 +0000 UTC" firstStartedPulling="2026-04-17 02:42:39.158834726 +0000 UTC m=+7.888388574" lastFinishedPulling="2026-04-17 02:43:02.71358074 +0000 UTC m=+31.443134596" observedRunningTime="2026-04-17 02:43:05.271779739 +0000 UTC m=+34.001333605" watchObservedRunningTime="2026-04-17 02:43:05.324115282 +0000 UTC m=+34.053669142"
Apr 17 02:43:05.616490 kubelet[2789]: E0417 02:43:05.615343 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:05.616867 containerd[1607]: time="2026-04-17T02:43:05.616691764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gt9t2,Uid:c80fcc5d-6ee3-49bd-829a-1182185f052a,Namespace:kube-system,Attempt:0,}"
Apr 17 02:43:05.638234 kubelet[2789]: E0417 02:43:05.637818 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:05.642377 containerd[1607]: time="2026-04-17T02:43:05.642238049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bxqjr,Uid:f6941710-2162-4edf-9e32-19e4628f8551,Namespace:kube-system,Attempt:0,}"
Apr 17 02:43:05.809578 kubelet[2789]: E0417 02:43:05.809030 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:07.529471 systemd-networkd[1490]: cilium_host: Link UP
Apr 17 02:43:07.533442 systemd-networkd[1490]: cilium_net: Link UP
Apr 17 02:43:07.535924 systemd-networkd[1490]: cilium_net: Gained carrier
Apr 17 02:43:07.536389 systemd-networkd[1490]: cilium_host: Gained carrier
Apr 17 02:43:07.615453 systemd-networkd[1490]: cilium_net: Gained IPv6LL
Apr 17 02:43:07.751403 systemd-networkd[1490]: cilium_vxlan: Link UP
Apr 17 02:43:07.751410 systemd-networkd[1490]: cilium_vxlan: Gained carrier
Apr 17 02:43:07.830302 systemd-networkd[1490]: cilium_host: Gained IPv6LL
Apr 17 02:43:08.146328 kernel: NET: Registered PF_ALG protocol family
Apr 17 02:43:09.417627 systemd-networkd[1490]: lxc_health: Link UP
Apr 17 02:43:09.485557 systemd-networkd[1490]: lxc_health: Gained carrier
Apr 17 02:43:09.574477 systemd-networkd[1490]: cilium_vxlan: Gained IPv6LL
Apr 17 02:43:09.909746 systemd-networkd[1490]: lxc750375e7a886: Link UP
Apr 17 02:43:09.913229 kernel: eth0: renamed from tmp9beaa
Apr 17 02:43:09.915636 systemd-networkd[1490]: lxcb5c977510b77: Link UP
Apr 17 02:43:09.921915 systemd-networkd[1490]: lxc750375e7a886: Gained carrier
Apr 17 02:43:09.924322 kernel: eth0: renamed from tmpc7734
Apr 17 02:43:09.928626 systemd-networkd[1490]: lxcb5c977510b77: Gained carrier
Apr 17 02:43:10.970997 kubelet[2789]: E0417 02:43:10.970354 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:10.983604 systemd-networkd[1490]: lxc750375e7a886: Gained IPv6LL
Apr 17 02:43:11.110525 systemd-networkd[1490]: lxc_health: Gained IPv6LL
Apr 17 02:43:11.239372 systemd-networkd[1490]: lxcb5c977510b77: Gained IPv6LL
Apr 17 02:43:11.846564 kubelet[2789]: E0417 02:43:11.846522 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:12.849074 kubelet[2789]: E0417 02:43:12.849006 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:16.620481 containerd[1607]: time="2026-04-17T02:43:16.620221337Z" level=info msg="connecting to shim 9beaaa8becb509d905b688940fbd02653bd90a913b2c7ebe6aa1b9fde45efcf9" address="unix:///run/containerd/s/e9094ac7432f799905f28a093519e9bb26ac9e94eb8806f55a3845501322b734" namespace=k8s.io protocol=ttrpc version=3
Apr 17 02:43:16.621528 containerd[1607]: time="2026-04-17T02:43:16.621477896Z" level=info msg="connecting to shim c77349072cfd04e6822fb9e9b1e24033214dd92ec9c47edcc3185866ffb7b1bd" address="unix:///run/containerd/s/76fad4671a66828d165c8e74e03a59dcdff52463685e04f45a644c1dc0ff0b61" namespace=k8s.io protocol=ttrpc version=3
Apr 17 02:43:16.662235 systemd[1]: Started cri-containerd-c77349072cfd04e6822fb9e9b1e24033214dd92ec9c47edcc3185866ffb7b1bd.scope - libcontainer container c77349072cfd04e6822fb9e9b1e24033214dd92ec9c47edcc3185866ffb7b1bd.
Apr 17 02:43:16.695159 systemd[1]: Started cri-containerd-9beaaa8becb509d905b688940fbd02653bd90a913b2c7ebe6aa1b9fde45efcf9.scope - libcontainer container 9beaaa8becb509d905b688940fbd02653bd90a913b2c7ebe6aa1b9fde45efcf9.
Apr 17 02:43:16.716608 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 02:43:16.735089 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 02:43:16.813068 containerd[1607]: time="2026-04-17T02:43:16.812848757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bxqjr,Uid:f6941710-2162-4edf-9e32-19e4628f8551,Namespace:kube-system,Attempt:0,} returns sandbox id \"c77349072cfd04e6822fb9e9b1e24033214dd92ec9c47edcc3185866ffb7b1bd\""
Apr 17 02:43:16.815300 kubelet[2789]: E0417 02:43:16.814861 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:16.816734 containerd[1607]: time="2026-04-17T02:43:16.816592424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gt9t2,Uid:c80fcc5d-6ee3-49bd-829a-1182185f052a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9beaaa8becb509d905b688940fbd02653bd90a913b2c7ebe6aa1b9fde45efcf9\""
Apr 17 02:43:16.818197 kubelet[2789]: E0417 02:43:16.818145 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:16.843147 containerd[1607]: time="2026-04-17T02:43:16.842444371Z" level=info msg="CreateContainer within sandbox \"9beaaa8becb509d905b688940fbd02653bd90a913b2c7ebe6aa1b9fde45efcf9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 02:43:16.848070 containerd[1607]: time="2026-04-17T02:43:16.847855533Z" level=info msg="CreateContainer within sandbox \"c77349072cfd04e6822fb9e9b1e24033214dd92ec9c47edcc3185866ffb7b1bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 02:43:16.884137 containerd[1607]: time="2026-04-17T02:43:16.883679395Z" level=info msg="Container e5707c3d8568a549e739629456d7d6cb8aeb0a1dc92e216110c544606d359ac9: CDI devices from CRI Config.CDIDevices: []"
Apr 17 02:43:16.888709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount514239159.mount: Deactivated successfully.
Apr 17 02:43:16.967695 containerd[1607]: time="2026-04-17T02:43:16.967597963Z" level=info msg="CreateContainer within sandbox \"9beaaa8becb509d905b688940fbd02653bd90a913b2c7ebe6aa1b9fde45efcf9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5707c3d8568a549e739629456d7d6cb8aeb0a1dc92e216110c544606d359ac9\""
Apr 17 02:43:16.970037 containerd[1607]: time="2026-04-17T02:43:16.969861908Z" level=info msg="StartContainer for \"e5707c3d8568a549e739629456d7d6cb8aeb0a1dc92e216110c544606d359ac9\""
Apr 17 02:43:16.972328 containerd[1607]: time="2026-04-17T02:43:16.972159285Z" level=info msg="connecting to shim e5707c3d8568a549e739629456d7d6cb8aeb0a1dc92e216110c544606d359ac9" address="unix:///run/containerd/s/e9094ac7432f799905f28a093519e9bb26ac9e94eb8806f55a3845501322b734" protocol=ttrpc version=3
Apr 17 02:43:16.987772 containerd[1607]: time="2026-04-17T02:43:16.986492821Z" level=info msg="Container 4ef87bad020130b2adb574cabc53813864be11724462ce433ffed7fc9b15316a: CDI devices from CRI Config.CDIDevices: []"
Apr 17 02:43:17.005624 containerd[1607]: time="2026-04-17T02:43:17.005459649Z" level=info msg="CreateContainer within sandbox \"c77349072cfd04e6822fb9e9b1e24033214dd92ec9c47edcc3185866ffb7b1bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ef87bad020130b2adb574cabc53813864be11724462ce433ffed7fc9b15316a\""
Apr 17 02:43:17.010130 containerd[1607]: time="2026-04-17T02:43:17.009995240Z" level=info msg="StartContainer for \"4ef87bad020130b2adb574cabc53813864be11724462ce433ffed7fc9b15316a\""
Apr 17 02:43:17.011291 containerd[1607]: time="2026-04-17T02:43:17.011262080Z" level=info msg="connecting to shim 4ef87bad020130b2adb574cabc53813864be11724462ce433ffed7fc9b15316a" address="unix:///run/containerd/s/76fad4671a66828d165c8e74e03a59dcdff52463685e04f45a644c1dc0ff0b61" protocol=ttrpc version=3
Apr 17 02:43:17.014137 systemd[1]: Started cri-containerd-e5707c3d8568a549e739629456d7d6cb8aeb0a1dc92e216110c544606d359ac9.scope - libcontainer container e5707c3d8568a549e739629456d7d6cb8aeb0a1dc92e216110c544606d359ac9.
Apr 17 02:43:17.048283 systemd[1]: Started cri-containerd-4ef87bad020130b2adb574cabc53813864be11724462ce433ffed7fc9b15316a.scope - libcontainer container 4ef87bad020130b2adb574cabc53813864be11724462ce433ffed7fc9b15316a.
Apr 17 02:43:17.090537 containerd[1607]: time="2026-04-17T02:43:17.090082398Z" level=info msg="StartContainer for \"e5707c3d8568a549e739629456d7d6cb8aeb0a1dc92e216110c544606d359ac9\" returns successfully"
Apr 17 02:43:17.136321 containerd[1607]: time="2026-04-17T02:43:17.135166877Z" level=info msg="StartContainer for \"4ef87bad020130b2adb574cabc53813864be11724462ce433ffed7fc9b15316a\" returns successfully"
Apr 17 02:43:18.000785 kubelet[2789]: E0417 02:43:18.000645 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:18.023715 kubelet[2789]: E0417 02:43:18.023491 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:18.045055 kubelet[2789]: I0417 02:43:18.044163 2789 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-bxqjr" podStartSLOduration=40.044151666 podStartE2EDuration="40.044151666s" podCreationTimestamp="2026-04-17 02:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:43:18.043624772 +0000 UTC m=+46.773178625" watchObservedRunningTime="2026-04-17 02:43:18.044151666 +0000 UTC m=+46.773705526"
Apr 17 02:43:18.087462 kubelet[2789]: I0417 02:43:18.086865 2789 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-gt9t2" podStartSLOduration=40.086853621 podStartE2EDuration="40.086853621s" podCreationTimestamp="2026-04-17 02:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:43:18.085442155 +0000 UTC m=+46.814996004" watchObservedRunningTime="2026-04-17 02:43:18.086853621 +0000 UTC m=+46.816407481"
Apr 17 02:43:19.028906 kubelet[2789]: E0417 02:43:19.028686 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:19.028906 kubelet[2789]: E0417 02:43:19.028781 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:20.042702 kubelet[2789]: E0417 02:43:20.042501 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:20.046885 kubelet[2789]: E0417 02:43:20.042611 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:23.166292 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:56824.service - OpenSSH per-connection server daemon (10.0.0.1:56824).
Apr 17 02:43:23.286074 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 56824 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:43:23.292697 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:43:23.303285 systemd-logind[1585]: New session 8 of user core.
Apr 17 02:43:23.314249 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 17 02:43:23.532610 sshd[4149]: Connection closed by 10.0.0.1 port 56824
Apr 17 02:43:23.534139 sshd-session[4146]: pam_unix(sshd:session): session closed for user core
Apr 17 02:43:23.539278 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:56824.service: Deactivated successfully.
Apr 17 02:43:23.543462 systemd[1]: session-8.scope: Deactivated successfully.
Apr 17 02:43:23.549132 systemd-logind[1585]: Session 8 logged out. Waiting for processes to exit.
Apr 17 02:43:23.553323 systemd-logind[1585]: Removed session 8.
Apr 17 02:43:28.559375 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:56840.service - OpenSSH per-connection server daemon (10.0.0.1:56840).
Apr 17 02:43:28.758765 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 56840 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:43:28.761189 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:43:28.874453 systemd-logind[1585]: New session 9 of user core.
Apr 17 02:43:28.897388 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 17 02:43:29.399474 sshd[4169]: Connection closed by 10.0.0.1 port 56840
Apr 17 02:43:29.400028 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
Apr 17 02:43:29.408698 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:56840.service: Deactivated successfully.
Apr 17 02:43:29.419760 systemd[1]: session-9.scope: Deactivated successfully.
Apr 17 02:43:29.421504 systemd-logind[1585]: Session 9 logged out. Waiting for processes to exit.
Apr 17 02:43:29.424506 systemd-logind[1585]: Removed session 9.
Apr 17 02:43:34.432979 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:55668.service - OpenSSH per-connection server daemon (10.0.0.1:55668).
Apr 17 02:43:34.626760 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 55668 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:43:34.629355 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:43:34.642268 systemd-logind[1585]: New session 10 of user core.
Apr 17 02:43:34.657180 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 02:43:35.045224 sshd[4188]: Connection closed by 10.0.0.1 port 55668
Apr 17 02:43:35.051449 sshd-session[4185]: pam_unix(sshd:session): session closed for user core
Apr 17 02:43:35.142660 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:55668.service: Deactivated successfully.
Apr 17 02:43:35.160222 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 02:43:35.203731 systemd-logind[1585]: Session 10 logged out. Waiting for processes to exit.
Apr 17 02:43:35.213159 systemd-logind[1585]: Removed session 10.
Apr 17 02:43:40.163578 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:56310.service - OpenSSH per-connection server daemon (10.0.0.1:56310).
Apr 17 02:43:40.244906 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 56310 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:43:40.253124 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:43:40.290516 systemd-logind[1585]: New session 11 of user core.
Apr 17 02:43:40.303625 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 02:43:40.556753 sshd[4209]: Connection closed by 10.0.0.1 port 56310
Apr 17 02:43:40.557556 sshd-session[4206]: pam_unix(sshd:session): session closed for user core
Apr 17 02:43:40.573906 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:56310.service: Deactivated successfully.
Apr 17 02:43:40.591539 systemd[1]: session-11.scope: Deactivated successfully.
Apr 17 02:43:40.592588 systemd-logind[1585]: Session 11 logged out. Waiting for processes to exit.
Apr 17 02:43:40.595074 systemd-logind[1585]: Removed session 11.
Apr 17 02:43:45.615128 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:56438.service - OpenSSH per-connection server daemon (10.0.0.1:56438).
Apr 17 02:43:45.880529 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 56438 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:43:45.898841 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:43:45.928447 systemd-logind[1585]: New session 12 of user core.
Apr 17 02:43:45.937514 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 02:43:46.525148 sshd[4228]: Connection closed by 10.0.0.1 port 56438
Apr 17 02:43:46.525900 sshd-session[4225]: pam_unix(sshd:session): session closed for user core
Apr 17 02:43:46.538806 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:56438.service: Deactivated successfully.
Apr 17 02:43:46.542386 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 02:43:46.544499 systemd-logind[1585]: Session 12 logged out. Waiting for processes to exit.
Apr 17 02:43:46.545851 systemd-logind[1585]: Removed session 12.
Apr 17 02:43:50.484073 kubelet[2789]: E0417 02:43:50.483737 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:51.549606 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:51432.service - OpenSSH per-connection server daemon (10.0.0.1:51432).
Apr 17 02:43:51.731220 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 51432 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:43:51.733309 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:43:51.771623 systemd-logind[1585]: New session 13 of user core.
Apr 17 02:43:51.786016 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 02:43:52.210588 sshd[4245]: Connection closed by 10.0.0.1 port 51432
Apr 17 02:43:52.211060 sshd-session[4242]: pam_unix(sshd:session): session closed for user core
Apr 17 02:43:52.224966 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:51432.service: Deactivated successfully.
Apr 17 02:43:52.228436 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 02:43:52.229807 systemd-logind[1585]: Session 13 logged out. Waiting for processes to exit.
Apr 17 02:43:52.236007 systemd-logind[1585]: Removed session 13.
Apr 17 02:43:52.491220 kubelet[2789]: E0417 02:43:52.490608 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:54.488306 kubelet[2789]: E0417 02:43:54.488109 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:43:57.245241 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:51568.service - OpenSSH per-connection server daemon (10.0.0.1:51568).
Apr 17 02:43:57.522202 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 51568 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:43:57.523923 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:43:57.529667 systemd-logind[1585]: New session 14 of user core.
Apr 17 02:43:57.538470 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 02:43:57.665170 sshd[4262]: Connection closed by 10.0.0.1 port 51568
Apr 17 02:43:57.665637 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
Apr 17 02:43:57.669364 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:51568.service: Deactivated successfully.
Apr 17 02:43:57.692023 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 02:43:57.693391 systemd-logind[1585]: Session 14 logged out. Waiting for processes to exit.
Apr 17 02:43:57.694845 systemd-logind[1585]: Removed session 14.
Apr 17 02:44:02.713328 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:49774.service - OpenSSH per-connection server daemon (10.0.0.1:49774).
Apr 17 02:44:02.916040 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 49774 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:44:02.922053 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:44:02.927746 systemd-logind[1585]: New session 15 of user core.
Apr 17 02:44:02.941471 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 02:44:03.282869 sshd[4279]: Connection closed by 10.0.0.1 port 49774
Apr 17 02:44:03.286860 sshd-session[4276]: pam_unix(sshd:session): session closed for user core
Apr 17 02:44:03.303846 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:49774.service: Deactivated successfully.
Apr 17 02:44:03.310783 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 02:44:03.312044 systemd-logind[1585]: Session 15 logged out. Waiting for processes to exit.
Apr 17 02:44:03.313887 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:49780.service - OpenSSH per-connection server daemon (10.0.0.1:49780).
Apr 17 02:44:03.314656 systemd-logind[1585]: Removed session 15.
Apr 17 02:44:03.486291 kubelet[2789]: E0417 02:44:03.486154 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:44:03.509640 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 49780 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:44:03.511470 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:44:03.519849 systemd-logind[1585]: New session 16 of user core.
Apr 17 02:44:03.534019 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 02:44:03.969578 sshd[4296]: Connection closed by 10.0.0.1 port 49780
Apr 17 02:44:03.971165 sshd-session[4293]: pam_unix(sshd:session): session closed for user core
Apr 17 02:44:03.995607 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:49780.service: Deactivated successfully.
Apr 17 02:44:04.051219 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 02:44:04.082288 systemd-logind[1585]: Session 16 logged out. Waiting for processes to exit.
Apr 17 02:44:04.102405 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:49782.service - OpenSSH per-connection server daemon (10.0.0.1:49782).
Apr 17 02:44:04.135201 systemd-logind[1585]: Removed session 16.
Apr 17 02:44:04.495060 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 49782 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:44:04.567467 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:44:04.616797 systemd-logind[1585]: New session 17 of user core.
Apr 17 02:44:04.631082 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 02:44:05.094204 sshd[4311]: Connection closed by 10.0.0.1 port 49782
Apr 17 02:44:05.096377 sshd-session[4308]: pam_unix(sshd:session): session closed for user core
Apr 17 02:44:05.124547 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:49782.service: Deactivated successfully.
Apr 17 02:44:05.132441 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 02:44:05.134009 systemd-logind[1585]: Session 17 logged out. Waiting for processes to exit.
Apr 17 02:44:05.137039 systemd-logind[1585]: Removed session 17.
Apr 17 02:44:10.187992 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:56992.service - OpenSSH per-connection server daemon (10.0.0.1:56992).
Apr 17 02:44:10.344788 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 56992 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:44:10.347170 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:44:10.501670 systemd-logind[1585]: New session 18 of user core.
Apr 17 02:44:10.528224 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 02:44:10.845842 sshd[4330]: Connection closed by 10.0.0.1 port 56992
Apr 17 02:44:10.847286 sshd-session[4327]: pam_unix(sshd:session): session closed for user core
Apr 17 02:44:10.875636 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:56992.service: Deactivated successfully.
Apr 17 02:44:10.887428 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 02:44:10.892606 systemd-logind[1585]: Session 18 logged out. Waiting for processes to exit.
Apr 17 02:44:10.894334 systemd-logind[1585]: Removed session 18.
Apr 17 02:44:15.924121 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:57162.service - OpenSSH per-connection server daemon (10.0.0.1:57162).
Apr 17 02:44:16.213857 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 57162 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:44:16.215122 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:44:16.225357 systemd-logind[1585]: New session 19 of user core.
Apr 17 02:44:16.241717 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 02:44:16.566138 sshd[4348]: Connection closed by 10.0.0.1 port 57162
Apr 17 02:44:16.566889 sshd-session[4345]: pam_unix(sshd:session): session closed for user core
Apr 17 02:44:16.570409 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:57162.service: Deactivated successfully.
Apr 17 02:44:16.607879 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 02:44:16.614709 systemd-logind[1585]: Session 19 logged out. Waiting for processes to exit.
Apr 17 02:44:16.624034 systemd-logind[1585]: Removed session 19. Apr 17 02:44:17.492163 kubelet[2789]: E0417 02:44:17.492086 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:44:18.496595 kubelet[2789]: E0417 02:44:18.496069 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:44:21.716220 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:53186.service - OpenSSH per-connection server daemon (10.0.0.1:53186). Apr 17 02:44:21.969018 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 53186 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:44:22.012567 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:44:22.057920 systemd-logind[1585]: New session 20 of user core. Apr 17 02:44:22.135624 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 02:44:22.738306 sshd[4364]: Connection closed by 10.0.0.1 port 53186 Apr 17 02:44:22.739366 sshd-session[4361]: pam_unix(sshd:session): session closed for user core Apr 17 02:44:22.760130 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:53186.service: Deactivated successfully. Apr 17 02:44:22.772914 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 02:44:22.788839 systemd-logind[1585]: Session 20 logged out. Waiting for processes to exit. Apr 17 02:44:22.895154 systemd-logind[1585]: Removed session 20. 
Apr 17 02:44:25.508773 kubelet[2789]: E0417 02:44:25.508317 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:44:27.784605 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:53330.service - OpenSSH per-connection server daemon (10.0.0.1:53330). Apr 17 02:44:28.019112 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 53330 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:44:28.021077 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:44:28.031463 systemd-logind[1585]: New session 21 of user core. Apr 17 02:44:28.045605 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 02:44:28.512416 sshd[4380]: Connection closed by 10.0.0.1 port 53330 Apr 17 02:44:28.514346 sshd-session[4377]: pam_unix(sshd:session): session closed for user core Apr 17 02:44:28.532511 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:53330.service: Deactivated successfully. Apr 17 02:44:28.537824 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 02:44:28.542384 systemd-logind[1585]: Session 21 logged out. Waiting for processes to exit. Apr 17 02:44:28.549913 systemd-logind[1585]: Removed session 21. 
Apr 17 02:44:31.131469 update_engine[1588]: I20260417 02:44:31.130872 1588 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 17 02:44:31.131469 update_engine[1588]: I20260417 02:44:31.131443 1588 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 17 02:44:31.132729 update_engine[1588]: I20260417 02:44:31.132522 1588 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 17 02:44:31.133339 update_engine[1588]: I20260417 02:44:31.133283 1588 omaha_request_params.cc:62] Current group set to stable Apr 17 02:44:31.138990 update_engine[1588]: I20260417 02:44:31.136675 1588 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 17 02:44:31.138990 update_engine[1588]: I20260417 02:44:31.138989 1588 update_attempter.cc:643] Scheduling an action processor start. Apr 17 02:44:31.139592 update_engine[1588]: I20260417 02:44:31.139139 1588 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 17 02:44:31.148453 update_engine[1588]: I20260417 02:44:31.146739 1588 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 17 02:44:31.148453 update_engine[1588]: I20260417 02:44:31.147785 1588 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 17 02:44:31.148453 update_engine[1588]: I20260417 02:44:31.147801 1588 omaha_request_action.cc:272] Request: Apr 17 02:44:31.148453 update_engine[1588]: Apr 17 02:44:31.148453 update_engine[1588]: Apr 17 02:44:31.148453 update_engine[1588]: Apr 17 02:44:31.148453 update_engine[1588]: Apr 17 02:44:31.148453 update_engine[1588]: Apr 17 02:44:31.148453 update_engine[1588]: Apr 17 02:44:31.148453 update_engine[1588]: Apr 17 02:44:31.148453 update_engine[1588]: Apr 17 02:44:31.148453 update_engine[1588]: I20260417 02:44:31.147807 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 17 02:44:31.154217 locksmithd[1645]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 17 02:44:31.163666 update_engine[1588]: I20260417 02:44:31.163515 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 17 02:44:31.165036 update_engine[1588]: I20260417 02:44:31.164901 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 17 02:44:31.196239 update_engine[1588]: E20260417 02:44:31.194571 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 17 02:44:31.252060 update_engine[1588]: I20260417 02:44:31.198769 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 17 02:44:33.631077 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:60946.service - OpenSSH per-connection server daemon (10.0.0.1:60946). Apr 17 02:44:33.835501 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 60946 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:44:33.837559 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:44:33.862150 systemd-logind[1585]: New session 22 of user core. Apr 17 02:44:33.889553 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 17 02:44:34.561467 sshd[4398]: Connection closed by 10.0.0.1 port 60946 Apr 17 02:44:34.562634 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Apr 17 02:44:34.571042 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:60946.service: Deactivated successfully. Apr 17 02:44:34.573123 systemd[1]: session-22.scope: Deactivated successfully. Apr 17 02:44:34.639653 systemd-logind[1585]: Session 22 logged out. Waiting for processes to exit. Apr 17 02:44:34.649672 systemd-logind[1585]: Removed session 22. Apr 17 02:44:39.651315 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:38352.service - OpenSSH per-connection server daemon (10.0.0.1:38352). 
Apr 17 02:44:40.011920 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 38352 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:44:40.014634 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:44:40.029404 systemd-logind[1585]: New session 23 of user core. Apr 17 02:44:40.047379 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 17 02:44:40.523786 sshd[4416]: Connection closed by 10.0.0.1 port 38352 Apr 17 02:44:40.524428 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Apr 17 02:44:40.559623 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:38352.service: Deactivated successfully. Apr 17 02:44:40.657406 systemd[1]: session-23.scope: Deactivated successfully. Apr 17 02:44:40.660070 systemd-logind[1585]: Session 23 logged out. Waiting for processes to exit. Apr 17 02:44:40.690894 systemd-logind[1585]: Removed session 23. Apr 17 02:44:41.151138 update_engine[1588]: I20260417 02:44:41.150579 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 17 02:44:41.159757 update_engine[1588]: I20260417 02:44:41.151620 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 17 02:44:41.160297 update_engine[1588]: I20260417 02:44:41.160056 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 17 02:44:41.165844 update_engine[1588]: E20260417 02:44:41.165459 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 17 02:44:41.166844 update_engine[1588]: I20260417 02:44:41.166004 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 17 02:44:45.564989 systemd[1]: Started sshd@23-10.0.0.6:22-10.0.0.1:38486.service - OpenSSH per-connection server daemon (10.0.0.1:38486). 
Apr 17 02:44:45.859539 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 38486 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:44:45.898907 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:44:45.953587 systemd-logind[1585]: New session 24 of user core. Apr 17 02:44:45.974117 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 17 02:44:46.383132 sshd[4432]: Connection closed by 10.0.0.1 port 38486 Apr 17 02:44:46.389588 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Apr 17 02:44:46.454854 systemd[1]: sshd@23-10.0.0.6:22-10.0.0.1:38486.service: Deactivated successfully. Apr 17 02:44:46.470924 systemd[1]: session-24.scope: Deactivated successfully. Apr 17 02:44:46.492366 systemd-logind[1585]: Session 24 logged out. Waiting for processes to exit. Apr 17 02:44:46.509552 systemd-logind[1585]: Removed session 24. Apr 17 02:44:49.488704 kubelet[2789]: E0417 02:44:49.488363 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:44:51.139922 update_engine[1588]: I20260417 02:44:51.134675 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 17 02:44:51.141563 update_engine[1588]: I20260417 02:44:51.141371 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 17 02:44:51.143243 update_engine[1588]: I20260417 02:44:51.142549 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 17 02:44:51.160351 update_engine[1588]: E20260417 02:44:51.158619 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 17 02:44:51.161225 update_engine[1588]: I20260417 02:44:51.161062 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 17 02:44:51.461593 systemd[1]: Started sshd@24-10.0.0.6:22-10.0.0.1:57924.service - OpenSSH per-connection server daemon (10.0.0.1:57924). Apr 17 02:44:52.048142 sshd[4446]: Accepted publickey for core from 10.0.0.1 port 57924 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:44:52.064501 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:44:52.152520 systemd-logind[1585]: New session 25 of user core. Apr 17 02:44:52.181588 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 17 02:44:52.923629 sshd[4449]: Connection closed by 10.0.0.1 port 57924 Apr 17 02:44:52.942998 sshd-session[4446]: pam_unix(sshd:session): session closed for user core Apr 17 02:44:53.039699 systemd[1]: sshd@24-10.0.0.6:22-10.0.0.1:57924.service: Deactivated successfully. Apr 17 02:44:53.101908 systemd[1]: session-25.scope: Deactivated successfully. Apr 17 02:44:53.211459 systemd-logind[1585]: Session 25 logged out. Waiting for processes to exit. Apr 17 02:44:53.235017 systemd[1]: Started sshd@25-10.0.0.6:22-10.0.0.1:58082.service - OpenSSH per-connection server daemon (10.0.0.1:58082). Apr 17 02:44:53.240077 systemd-logind[1585]: Removed session 25. Apr 17 02:44:53.536358 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 58082 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:44:53.540714 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:44:53.548018 systemd-logind[1585]: New session 26 of user core. Apr 17 02:44:53.556253 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 17 02:44:55.715584 sshd[4465]: Connection closed by 10.0.0.1 port 58082 Apr 17 02:44:55.718351 sshd-session[4462]: pam_unix(sshd:session): session closed for user core Apr 17 02:44:55.737921 systemd[1]: sshd@25-10.0.0.6:22-10.0.0.1:58082.service: Deactivated successfully. Apr 17 02:44:55.743346 systemd[1]: session-26.scope: Deactivated successfully. Apr 17 02:44:55.743560 systemd[1]: session-26.scope: Consumed 1.530s CPU time, 48M memory peak. Apr 17 02:44:55.744782 systemd-logind[1585]: Session 26 logged out. Waiting for processes to exit. Apr 17 02:44:55.763592 systemd[1]: Started sshd@26-10.0.0.6:22-10.0.0.1:58088.service - OpenSSH per-connection server daemon (10.0.0.1:58088). Apr 17 02:44:55.789118 systemd-logind[1585]: Removed session 26. Apr 17 02:44:56.237444 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 58088 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:44:56.242982 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:44:56.369432 systemd-logind[1585]: New session 27 of user core. Apr 17 02:44:56.377371 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 17 02:45:01.140089 update_engine[1588]: I20260417 02:45:01.136814 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 17 02:45:01.141568 update_engine[1588]: I20260417 02:45:01.140848 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 17 02:45:01.141990 update_engine[1588]: I20260417 02:45:01.141710 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 17 02:45:01.161710 update_engine[1588]: E20260417 02:45:01.158643 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 17 02:45:01.189855 update_engine[1588]: I20260417 02:45:01.166715 1588 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 17 02:45:01.189855 update_engine[1588]: I20260417 02:45:01.170589 1588 omaha_request_action.cc:617] Omaha request response: Apr 17 02:45:01.189855 update_engine[1588]: E20260417 02:45:01.180297 1588 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 17 02:45:01.189855 update_engine[1588]: I20260417 02:45:01.184262 1588 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 17 02:45:01.189855 update_engine[1588]: I20260417 02:45:01.185532 1588 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 17 02:45:01.189855 update_engine[1588]: I20260417 02:45:01.187597 1588 update_attempter.cc:306] Processing Done. Apr 17 02:45:01.189855 update_engine[1588]: E20260417 02:45:01.189543 1588 update_attempter.cc:619] Update failed. Apr 17 02:45:01.189855 update_engine[1588]: I20260417 02:45:01.189675 1588 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 17 02:45:01.189855 update_engine[1588]: I20260417 02:45:01.189682 1588 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 17 02:45:01.189855 update_engine[1588]: I20260417 02:45:01.189688 1588 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 17 02:45:01.201265 update_engine[1588]: I20260417 02:45:01.193287 1588 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 17 02:45:01.201265 update_engine[1588]: I20260417 02:45:01.196158 1588 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 17 02:45:01.201265 update_engine[1588]: I20260417 02:45:01.196257 1588 omaha_request_action.cc:272] Request: Apr 17 02:45:01.201265 update_engine[1588]: Apr 17 02:45:01.201265 update_engine[1588]: Apr 17 02:45:01.201265 update_engine[1588]: Apr 17 02:45:01.201265 update_engine[1588]: Apr 17 02:45:01.201265 update_engine[1588]: Apr 17 02:45:01.201265 update_engine[1588]: Apr 17 02:45:01.201265 update_engine[1588]: I20260417 02:45:01.196265 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 17 02:45:01.201265 update_engine[1588]: I20260417 02:45:01.197701 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 17 02:45:01.228735 update_engine[1588]: I20260417 02:45:01.214900 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 17 02:45:01.230711 locksmithd[1645]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 17 02:45:01.244838 update_engine[1588]: E20260417 02:45:01.242453 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 17 02:45:01.244838 update_engine[1588]: I20260417 02:45:01.244597 1588 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 17 02:45:01.244838 update_engine[1588]: I20260417 02:45:01.244789 1588 omaha_request_action.cc:617] Omaha request response: Apr 17 02:45:01.244838 update_engine[1588]: I20260417 02:45:01.244799 1588 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 17 02:45:01.244838 update_engine[1588]: I20260417 02:45:01.244803 1588 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 17 02:45:01.244838 update_engine[1588]: I20260417 02:45:01.244807 1588 update_attempter.cc:306] Processing Done. Apr 17 02:45:01.244838 update_engine[1588]: I20260417 02:45:01.244812 1588 update_attempter.cc:310] Error event sent. 
Apr 17 02:45:01.247443 update_engine[1588]: I20260417 02:45:01.244848 1588 update_check_scheduler.cc:74] Next update check in 41m58s Apr 17 02:45:01.249482 locksmithd[1645]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 17 02:45:01.498656 kubelet[2789]: E0417 02:45:01.498502 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:45:07.581859 kubelet[2789]: E0417 02:45:07.581539 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:45:10.444636 sshd[4480]: Connection closed by 10.0.0.1 port 58088 Apr 17 02:45:10.455714 sshd-session[4477]: pam_unix(sshd:session): session closed for user core Apr 17 02:45:10.554400 systemd[1]: sshd@26-10.0.0.6:22-10.0.0.1:58088.service: Deactivated successfully. Apr 17 02:45:10.594770 systemd[1]: session-27.scope: Deactivated successfully. Apr 17 02:45:10.596388 systemd[1]: session-27.scope: Consumed 8.668s CPU time, 48.8M memory peak. Apr 17 02:45:10.620983 systemd-logind[1585]: Session 27 logged out. Waiting for processes to exit. Apr 17 02:45:10.628638 systemd[1]: Started sshd@27-10.0.0.6:22-10.0.0.1:39388.service - OpenSSH per-connection server daemon (10.0.0.1:39388). Apr 17 02:45:10.632379 systemd-logind[1585]: Removed session 27. Apr 17 02:45:11.469303 sshd[4503]: Accepted publickey for core from 10.0.0.1 port 39388 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:45:11.516503 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:45:11.835551 systemd-logind[1585]: New session 28 of user core. Apr 17 02:45:11.860269 systemd[1]: Started session-28.scope - Session 28 of User core. 
Apr 17 02:45:13.486331 kubelet[2789]: E0417 02:45:13.486205 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:45:16.367337 sshd[4509]: Connection closed by 10.0.0.1 port 39388 Apr 17 02:45:16.438571 sshd-session[4503]: pam_unix(sshd:session): session closed for user core Apr 17 02:45:16.546669 systemd[1]: sshd@27-10.0.0.6:22-10.0.0.1:39388.service: Deactivated successfully. Apr 17 02:45:16.620329 systemd[1]: session-28.scope: Deactivated successfully. Apr 17 02:45:16.627547 systemd[1]: session-28.scope: Consumed 2.956s CPU time, 24.9M memory peak. Apr 17 02:45:16.649560 systemd-logind[1585]: Session 28 logged out. Waiting for processes to exit. Apr 17 02:45:16.787837 systemd[1]: Started sshd@28-10.0.0.6:22-10.0.0.1:39396.service - OpenSSH per-connection server daemon (10.0.0.1:39396). Apr 17 02:45:16.891723 systemd-logind[1585]: Removed session 28. Apr 17 02:45:17.487299 sshd[4522]: Accepted publickey for core from 10.0.0.1 port 39396 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:45:17.524468 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:45:17.724691 systemd-logind[1585]: New session 29 of user core. Apr 17 02:45:17.760272 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 17 02:45:20.095453 sshd[4525]: Connection closed by 10.0.0.1 port 39396 Apr 17 02:45:20.105674 sshd-session[4522]: pam_unix(sshd:session): session closed for user core Apr 17 02:45:20.346473 systemd[1]: sshd@28-10.0.0.6:22-10.0.0.1:39396.service: Deactivated successfully. Apr 17 02:45:20.447461 systemd[1]: session-29.scope: Deactivated successfully. Apr 17 02:45:20.451701 systemd[1]: session-29.scope: Consumed 1.427s CPU time, 16.6M memory peak. Apr 17 02:45:20.597518 systemd-logind[1585]: Session 29 logged out. Waiting for processes to exit. 
Apr 17 02:45:20.687877 systemd-logind[1585]: Removed session 29. Apr 17 02:45:25.557707 systemd[1]: Started sshd@29-10.0.0.6:22-10.0.0.1:54120.service - OpenSSH per-connection server daemon (10.0.0.1:54120). Apr 17 02:45:26.370711 kubelet[2789]: E0417 02:45:26.367821 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:45:27.488674 sshd[4538]: Accepted publickey for core from 10.0.0.1 port 54120 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:45:27.598475 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:45:28.118647 systemd-logind[1585]: New session 30 of user core. Apr 17 02:45:28.265828 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 17 02:45:28.628369 kubelet[2789]: E0417 02:45:28.612793 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:45:32.937497 sshd[4543]: Connection closed by 10.0.0.1 port 54120 Apr 17 02:45:32.942596 sshd-session[4538]: pam_unix(sshd:session): session closed for user core Apr 17 02:45:33.159783 systemd[1]: sshd@29-10.0.0.6:22-10.0.0.1:54120.service: Deactivated successfully. Apr 17 02:45:33.272858 systemd[1]: session-30.scope: Deactivated successfully. Apr 17 02:45:33.276258 systemd[1]: session-30.scope: Consumed 2.476s CPU time, 15.8M memory peak. Apr 17 02:45:33.385430 systemd-logind[1585]: Session 30 logged out. Waiting for processes to exit. Apr 17 02:45:33.510147 systemd-logind[1585]: Removed session 30. Apr 17 02:45:38.425578 systemd[1]: Started sshd@30-10.0.0.6:22-10.0.0.1:60752.service - OpenSSH per-connection server daemon (10.0.0.1:60752). 
Apr 17 02:45:40.810576 sshd[4558]: Accepted publickey for core from 10.0.0.1 port 60752 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:45:40.881465 sshd-session[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:45:41.437823 systemd-logind[1585]: New session 31 of user core. Apr 17 02:45:41.466472 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 17 02:45:43.181326 sshd[4564]: Connection closed by 10.0.0.1 port 60752 Apr 17 02:45:43.186505 sshd-session[4558]: pam_unix(sshd:session): session closed for user core Apr 17 02:45:43.370667 systemd[1]: sshd@30-10.0.0.6:22-10.0.0.1:60752.service: Deactivated successfully. Apr 17 02:45:43.510818 systemd[1]: session-31.scope: Deactivated successfully. Apr 17 02:45:43.641212 systemd-logind[1585]: Session 31 logged out. Waiting for processes to exit. Apr 17 02:45:43.671498 systemd-logind[1585]: Removed session 31. Apr 17 02:45:45.496379 kubelet[2789]: E0417 02:45:45.495696 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:45:46.489726 kubelet[2789]: E0417 02:45:46.488424 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:45:48.549156 systemd[1]: Started sshd@31-10.0.0.6:22-10.0.0.1:44752.service - OpenSSH per-connection server daemon (10.0.0.1:44752). Apr 17 02:45:49.580449 sshd[4580]: Accepted publickey for core from 10.0.0.1 port 44752 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:45:49.659630 sshd-session[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:45:49.988613 systemd-logind[1585]: New session 32 of user core. Apr 17 02:45:50.055707 systemd[1]: Started session-32.scope - Session 32 of User core. 
Apr 17 02:45:52.360505 sshd[4583]: Connection closed by 10.0.0.1 port 44752 Apr 17 02:45:52.363617 sshd-session[4580]: pam_unix(sshd:session): session closed for user core Apr 17 02:45:52.568768 systemd[1]: sshd@31-10.0.0.6:22-10.0.0.1:44752.service: Deactivated successfully. Apr 17 02:45:52.661471 systemd[1]: session-32.scope: Deactivated successfully. Apr 17 02:45:52.675072 systemd[1]: session-32.scope: Consumed 1.248s CPU time, 15.9M memory peak. Apr 17 02:45:52.789284 systemd-logind[1585]: Session 32 logged out. Waiting for processes to exit. Apr 17 02:45:52.930845 systemd-logind[1585]: Removed session 32. Apr 17 02:45:57.672587 systemd[1]: Started sshd@32-10.0.0.6:22-10.0.0.1:55794.service - OpenSSH per-connection server daemon (10.0.0.1:55794). Apr 17 02:45:58.645270 sshd[4596]: Accepted publickey for core from 10.0.0.1 port 55794 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:45:58.690916 sshd-session[4596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:45:58.965802 systemd-logind[1585]: New session 33 of user core. Apr 17 02:45:59.160339 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 17 02:46:03.119241 sshd[4599]: Connection closed by 10.0.0.1 port 55794 Apr 17 02:46:03.143536 sshd-session[4596]: pam_unix(sshd:session): session closed for user core Apr 17 02:46:03.370676 systemd[1]: sshd@32-10.0.0.6:22-10.0.0.1:55794.service: Deactivated successfully. Apr 17 02:46:03.418918 systemd[1]: session-33.scope: Deactivated successfully. Apr 17 02:46:03.433767 systemd[1]: session-33.scope: Consumed 2.061s CPU time, 14.9M memory peak. Apr 17 02:46:03.565483 systemd-logind[1585]: Session 33 logged out. Waiting for processes to exit. Apr 17 02:46:03.652676 systemd-logind[1585]: Removed session 33. Apr 17 02:46:08.491812 systemd[1]: Started sshd@33-10.0.0.6:22-10.0.0.1:58220.service - OpenSSH per-connection server daemon (10.0.0.1:58220). 
Apr 17 02:46:09.761632 sshd[4613]: Accepted publickey for core from 10.0.0.1 port 58220 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:46:09.839620 sshd-session[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:46:10.157270 systemd-logind[1585]: New session 34 of user core. Apr 17 02:46:10.243258 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 17 02:46:12.450069 sshd[4618]: Connection closed by 10.0.0.1 port 58220 Apr 17 02:46:12.460814 sshd-session[4613]: pam_unix(sshd:session): session closed for user core Apr 17 02:46:12.691821 systemd[1]: sshd@33-10.0.0.6:22-10.0.0.1:58220.service: Deactivated successfully. Apr 17 02:46:12.788515 systemd[1]: session-34.scope: Deactivated successfully. Apr 17 02:46:12.795730 systemd[1]: session-34.scope: Consumed 1.124s CPU time, 14.8M memory peak. Apr 17 02:46:12.932113 systemd-logind[1585]: Session 34 logged out. Waiting for processes to exit. Apr 17 02:46:13.066912 systemd-logind[1585]: Removed session 34. Apr 17 02:46:14.541095 kubelet[2789]: E0417 02:46:14.539762 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:46:17.638453 systemd[1]: Started sshd@34-10.0.0.6:22-10.0.0.1:37416.service - OpenSSH per-connection server daemon (10.0.0.1:37416). Apr 17 02:46:18.617763 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 37416 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:46:18.662563 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:46:18.980246 systemd-logind[1585]: New session 35 of user core. Apr 17 02:46:19.038542 systemd[1]: Started session-35.scope - Session 35 of User core. 
Apr 17 02:46:20.717821 sshd[4637]: Connection closed by 10.0.0.1 port 37416
Apr 17 02:46:20.726199 sshd-session[4634]: pam_unix(sshd:session): session closed for user core
Apr 17 02:46:20.810001 systemd[1]: sshd@34-10.0.0.6:22-10.0.0.1:37416.service: Deactivated successfully.
Apr 17 02:46:20.870879 systemd[1]: session-35.scope: Deactivated successfully.
Apr 17 02:46:20.874595 systemd[1]: session-35.scope: Consumed 1.081s CPU time, 15.1M memory peak.
Apr 17 02:46:20.966272 systemd-logind[1585]: Session 35 logged out. Waiting for processes to exit.
Apr 17 02:46:20.993570 systemd-logind[1585]: Removed session 35.
Apr 17 02:46:21.489674 kubelet[2789]: E0417 02:46:21.489084 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:46:22.484842 kubelet[2789]: E0417 02:46:22.484520 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:46:25.491740 kubelet[2789]: E0417 02:46:25.491048 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:46:25.821816 systemd[1]: Started sshd@35-10.0.0.6:22-10.0.0.1:34770.service - OpenSSH per-connection server daemon (10.0.0.1:34770).
Apr 17 02:46:26.220049 sshd[4650]: Accepted publickey for core from 10.0.0.1 port 34770 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:46:26.223836 sshd-session[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:46:26.367886 systemd-logind[1585]: New session 36 of user core.
Apr 17 02:46:26.474894 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 17 02:46:27.805860 sshd[4653]: Connection closed by 10.0.0.1 port 34770
Apr 17 02:46:27.819734 sshd-session[4650]: pam_unix(sshd:session): session closed for user core
Apr 17 02:46:27.945130 systemd[1]: sshd@35-10.0.0.6:22-10.0.0.1:34770.service: Deactivated successfully.
Apr 17 02:46:28.072593 systemd[1]: session-36.scope: Deactivated successfully.
Apr 17 02:46:28.108662 systemd-logind[1585]: Session 36 logged out. Waiting for processes to exit.
Apr 17 02:46:28.120589 systemd-logind[1585]: Removed session 36.
Apr 17 02:46:33.062315 systemd[1]: Started sshd@36-10.0.0.6:22-10.0.0.1:51734.service - OpenSSH per-connection server daemon (10.0.0.1:51734).
Apr 17 02:46:33.569180 kubelet[2789]: E0417 02:46:33.566818 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:46:34.216239 sshd[4668]: Accepted publickey for core from 10.0.0.1 port 51734 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:46:34.253320 sshd-session[4668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:46:34.583474 systemd-logind[1585]: New session 37 of user core.
Apr 17 02:46:34.593918 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 17 02:46:36.456311 sshd[4671]: Connection closed by 10.0.0.1 port 51734
Apr 17 02:46:36.476521 sshd-session[4668]: pam_unix(sshd:session): session closed for user core
Apr 17 02:46:36.602476 kubelet[2789]: E0417 02:46:36.602400 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:46:36.680297 systemd[1]: sshd@36-10.0.0.6:22-10.0.0.1:51734.service: Deactivated successfully.
Apr 17 02:46:36.792808 systemd[1]: session-37.scope: Deactivated successfully.
Apr 17 02:46:36.837843 systemd[1]: session-37.scope: Consumed 1.152s CPU time, 16M memory peak.
Apr 17 02:46:36.958413 systemd-logind[1585]: Session 37 logged out. Waiting for processes to exit.
Apr 17 02:46:37.045759 systemd-logind[1585]: Removed session 37.
Apr 17 02:46:41.673513 systemd[1]: Started sshd@37-10.0.0.6:22-10.0.0.1:39500.service - OpenSSH per-connection server daemon (10.0.0.1:39500).
Apr 17 02:46:42.576465 sshd[4686]: Accepted publickey for core from 10.0.0.1 port 39500 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:46:42.604341 sshd-session[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:46:42.877692 systemd-logind[1585]: New session 38 of user core.
Apr 17 02:46:42.969073 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 17 02:46:45.034340 sshd[4689]: Connection closed by 10.0.0.1 port 39500
Apr 17 02:46:45.038346 sshd-session[4686]: pam_unix(sshd:session): session closed for user core
Apr 17 02:46:45.176141 systemd[1]: sshd@37-10.0.0.6:22-10.0.0.1:39500.service: Deactivated successfully.
Apr 17 02:46:45.200901 systemd[1]: session-38.scope: Deactivated successfully.
Apr 17 02:46:45.201354 systemd[1]: session-38.scope: Consumed 1.155s CPU time, 15.9M memory peak.
Apr 17 02:46:45.242168 systemd-logind[1585]: Session 38 logged out. Waiting for processes to exit.
Apr 17 02:46:45.370572 systemd-logind[1585]: Removed session 38.
Apr 17 02:46:46.506634 kubelet[2789]: E0417 02:46:46.503722 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:46:50.345784 systemd[1]: Started sshd@38-10.0.0.6:22-10.0.0.1:60330.service - OpenSSH per-connection server daemon (10.0.0.1:60330).
Apr 17 02:46:50.994641 sshd[4702]: Accepted publickey for core from 10.0.0.1 port 60330 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:46:51.083527 sshd-session[4702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:46:51.456204 systemd-logind[1585]: New session 39 of user core.
Apr 17 02:46:51.480862 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 17 02:46:53.726081 sshd[4705]: Connection closed by 10.0.0.1 port 60330
Apr 17 02:46:53.734292 sshd-session[4702]: pam_unix(sshd:session): session closed for user core
Apr 17 02:46:53.934502 systemd[1]: sshd@38-10.0.0.6:22-10.0.0.1:60330.service: Deactivated successfully.
Apr 17 02:46:53.968918 systemd[1]: session-39.scope: Deactivated successfully.
Apr 17 02:46:53.972642 systemd[1]: session-39.scope: Consumed 1.136s CPU time, 15.6M memory peak.
Apr 17 02:46:54.083175 systemd-logind[1585]: Session 39 logged out. Waiting for processes to exit.
Apr 17 02:46:54.175925 systemd-logind[1585]: Removed session 39.
Apr 17 02:46:54.510421 kubelet[2789]: E0417 02:46:54.508232 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:46:58.944578 systemd[1]: Started sshd@39-10.0.0.6:22-10.0.0.1:60338.service - OpenSSH per-connection server daemon (10.0.0.1:60338).
Apr 17 02:46:59.593821 sshd[4720]: Accepted publickey for core from 10.0.0.1 port 60338 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:46:59.650808 sshd-session[4720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:46:59.996369 systemd-logind[1585]: New session 40 of user core.
Apr 17 02:47:00.096349 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 17 02:47:03.030610 sshd[4723]: Connection closed by 10.0.0.1 port 60338
Apr 17 02:47:03.039780 sshd-session[4720]: pam_unix(sshd:session): session closed for user core
Apr 17 02:47:03.345595 systemd[1]: sshd@39-10.0.0.6:22-10.0.0.1:60338.service: Deactivated successfully.
Apr 17 02:47:03.447332 systemd[1]: session-40.scope: Deactivated successfully.
Apr 17 02:47:03.449665 systemd[1]: session-40.scope: Consumed 1.688s CPU time, 15.6M memory peak.
Apr 17 02:47:03.577098 systemd-logind[1585]: Session 40 logged out. Waiting for processes to exit.
Apr 17 02:47:03.782423 systemd-logind[1585]: Removed session 40.
Apr 17 02:47:08.254686 systemd[1]: Started sshd@40-10.0.0.6:22-10.0.0.1:48378.service - OpenSSH per-connection server daemon (10.0.0.1:48378).
Apr 17 02:47:09.021887 sshd[4736]: Accepted publickey for core from 10.0.0.1 port 48378 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:47:09.059048 sshd-session[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:47:09.272847 systemd-logind[1585]: New session 41 of user core.
Apr 17 02:47:09.370469 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 17 02:47:10.819232 sshd[4739]: Connection closed by 10.0.0.1 port 48378
Apr 17 02:47:10.820612 sshd-session[4736]: pam_unix(sshd:session): session closed for user core
Apr 17 02:47:10.848883 systemd[1]: sshd@40-10.0.0.6:22-10.0.0.1:48378.service: Deactivated successfully.
Apr 17 02:47:10.963104 systemd[1]: session-41.scope: Deactivated successfully.
Apr 17 02:47:10.976408 systemd-logind[1585]: Session 41 logged out. Waiting for processes to exit.
Apr 17 02:47:11.020863 systemd-logind[1585]: Removed session 41.
Apr 17 02:47:15.971553 systemd[1]: Started sshd@41-10.0.0.6:22-10.0.0.1:50298.service - OpenSSH per-connection server daemon (10.0.0.1:50298).
Apr 17 02:47:16.926327 sshd[4756]: Accepted publickey for core from 10.0.0.1 port 50298 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:47:16.947397 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:47:17.166898 systemd-logind[1585]: New session 42 of user core.
Apr 17 02:47:17.257690 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 17 02:47:21.247271 sshd[4760]: Connection closed by 10.0.0.1 port 50298
Apr 17 02:47:21.247745 sshd-session[4756]: pam_unix(sshd:session): session closed for user core
Apr 17 02:47:21.474276 systemd[1]: sshd@41-10.0.0.6:22-10.0.0.1:50298.service: Deactivated successfully.
Apr 17 02:47:21.581916 systemd[1]: session-42.scope: Deactivated successfully.
Apr 17 02:47:21.583471 systemd[1]: session-42.scope: Consumed 2.451s CPU time, 14.5M memory peak.
Apr 17 02:47:21.638849 systemd-logind[1585]: Session 42 logged out. Waiting for processes to exit.
Apr 17 02:47:21.840688 systemd-logind[1585]: Removed session 42.
Apr 17 02:47:25.226398 containerd[1607]: time="2026-04-17T02:47:25.222886312Z" level=warning msg="container event discarded" container=38e1479a75e0a5a927edef581feb6fb9cb541bed192aa2e6959d1bde3690ef3f type=CONTAINER_CREATED_EVENT
Apr 17 02:47:25.238773 containerd[1607]: time="2026-04-17T02:47:25.230028297Z" level=warning msg="container event discarded" container=38e1479a75e0a5a927edef581feb6fb9cb541bed192aa2e6959d1bde3690ef3f type=CONTAINER_STARTED_EVENT
Apr 17 02:47:25.278508 containerd[1607]: time="2026-04-17T02:47:25.276668188Z" level=warning msg="container event discarded" container=92801252f9235da16f99a8dffcda49a73ae475ee16453051ab7fa5b926408a57 type=CONTAINER_CREATED_EVENT
Apr 17 02:47:25.280898 containerd[1607]: time="2026-04-17T02:47:25.278787814Z" level=warning msg="container event discarded" container=92801252f9235da16f99a8dffcda49a73ae475ee16453051ab7fa5b926408a57 type=CONTAINER_STARTED_EVENT
Apr 17 02:47:25.321103 containerd[1607]: time="2026-04-17T02:47:25.319609731Z" level=warning msg="container event discarded" container=a9381e989684f958dbbf935bfc093a8f16c1f7cd3df37fe78cc039e94c8edbb6 type=CONTAINER_CREATED_EVENT
Apr 17 02:47:25.325837 containerd[1607]: time="2026-04-17T02:47:25.322670271Z" level=warning msg="container event discarded" container=dff8ff66f25f2b81d0ee59cfb7c6dd3151b521e38a581068a7d258412d1fb62e type=CONTAINER_CREATED_EVENT
Apr 17 02:47:25.325837 containerd[1607]: time="2026-04-17T02:47:25.323925920Z" level=warning msg="container event discarded" container=dff8ff66f25f2b81d0ee59cfb7c6dd3151b521e38a581068a7d258412d1fb62e type=CONTAINER_STARTED_EVENT
Apr 17 02:47:25.346137 containerd[1607]: time="2026-04-17T02:47:25.345223789Z" level=warning msg="container event discarded" container=601681eb4035b019617cefcf4f5c418af08a3db7ab3784e1ea16b90515b66b8c type=CONTAINER_CREATED_EVENT
Apr 17 02:47:25.360130 containerd[1607]: time="2026-04-17T02:47:25.348714067Z" level=warning msg="container event discarded" container=4d4143450c5a764608cea5a8f807ec0e643a789d2167f5fd03eb4379e74f63d3 type=CONTAINER_CREATED_EVENT
Apr 17 02:47:25.538900 containerd[1607]: time="2026-04-17T02:47:25.528841623Z" level=warning msg="container event discarded" container=a9381e989684f958dbbf935bfc093a8f16c1f7cd3df37fe78cc039e94c8edbb6 type=CONTAINER_STARTED_EVENT
Apr 17 02:47:25.538900 containerd[1607]: time="2026-04-17T02:47:25.536686049Z" level=warning msg="container event discarded" container=601681eb4035b019617cefcf4f5c418af08a3db7ab3784e1ea16b90515b66b8c type=CONTAINER_STARTED_EVENT
Apr 17 02:47:25.594087 containerd[1607]: time="2026-04-17T02:47:25.591500883Z" level=warning msg="container event discarded" container=4d4143450c5a764608cea5a8f807ec0e643a789d2167f5fd03eb4379e74f63d3 type=CONTAINER_STARTED_EVENT
Apr 17 02:47:26.475664 systemd[1]: Started sshd@42-10.0.0.6:22-10.0.0.1:41556.service - OpenSSH per-connection server daemon (10.0.0.1:41556).
Apr 17 02:47:27.646113 sshd[4773]: Accepted publickey for core from 10.0.0.1 port 41556 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:47:27.677205 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:47:28.004883 systemd-logind[1585]: New session 43 of user core.
Apr 17 02:47:28.041925 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 17 02:47:30.377041 sshd[4776]: Connection closed by 10.0.0.1 port 41556
Apr 17 02:47:30.383625 sshd-session[4773]: pam_unix(sshd:session): session closed for user core
Apr 17 02:47:30.472846 systemd[1]: sshd@42-10.0.0.6:22-10.0.0.1:41556.service: Deactivated successfully.
Apr 17 02:47:30.582757 systemd[1]: session-43.scope: Deactivated successfully.
Apr 17 02:47:30.589448 systemd[1]: session-43.scope: Consumed 1.345s CPU time, 16.1M memory peak.
Apr 17 02:47:30.638763 systemd-logind[1585]: Session 43 logged out. Waiting for processes to exit.
Apr 17 02:47:30.778394 systemd-logind[1585]: Removed session 43.
Apr 17 02:47:32.572435 kubelet[2789]: E0417 02:47:32.568879 2789 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.093s"
Apr 17 02:47:35.548313 systemd[1]: Started sshd@43-10.0.0.6:22-10.0.0.1:34594.service - OpenSSH per-connection server daemon (10.0.0.1:34594).
Apr 17 02:47:35.797052 sshd[4792]: Accepted publickey for core from 10.0.0.1 port 34594 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:47:35.802794 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:47:35.866819 systemd-logind[1585]: New session 44 of user core.
Apr 17 02:47:35.892881 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 17 02:47:37.018817 sshd[4795]: Connection closed by 10.0.0.1 port 34594
Apr 17 02:47:37.020699 sshd-session[4792]: pam_unix(sshd:session): session closed for user core
Apr 17 02:47:37.060919 systemd[1]: sshd@43-10.0.0.6:22-10.0.0.1:34594.service: Deactivated successfully.
Apr 17 02:47:37.123869 systemd[1]: session-44.scope: Deactivated successfully.
Apr 17 02:47:37.125631 systemd-logind[1585]: Session 44 logged out. Waiting for processes to exit.
Apr 17 02:47:37.175619 systemd[1]: Started sshd@44-10.0.0.6:22-10.0.0.1:34606.service - OpenSSH per-connection server daemon (10.0.0.1:34606).
Apr 17 02:47:37.250207 systemd-logind[1585]: Removed session 44.
Apr 17 02:47:37.487735 kubelet[2789]: E0417 02:47:37.487605 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:47:38.016840 sshd[4808]: Accepted publickey for core from 10.0.0.1 port 34606 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI
Apr 17 02:47:38.025812 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:47:38.226246 systemd-logind[1585]: New session 45 of user core.
Apr 17 02:47:38.245767 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 17 02:47:39.156372 containerd[1607]: time="2026-04-17T02:47:39.155744585Z" level=warning msg="container event discarded" container=f0a6ee64afc8e62d65766f39b01e9b058ec5bc3eb7b53dbde8fc536ff33e56e9 type=CONTAINER_CREATED_EVENT
Apr 17 02:47:39.156372 containerd[1607]: time="2026-04-17T02:47:39.156217473Z" level=warning msg="container event discarded" container=f0a6ee64afc8e62d65766f39b01e9b058ec5bc3eb7b53dbde8fc536ff33e56e9 type=CONTAINER_STARTED_EVENT
Apr 17 02:47:39.156372 containerd[1607]: time="2026-04-17T02:47:39.156234575Z" level=warning msg="container event discarded" container=e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa type=CONTAINER_CREATED_EVENT
Apr 17 02:47:39.156372 containerd[1607]: time="2026-04-17T02:47:39.156242997Z" level=warning msg="container event discarded" container=e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa type=CONTAINER_STARTED_EVENT
Apr 17 02:47:39.225254 containerd[1607]: time="2026-04-17T02:47:39.223324943Z" level=warning msg="container event discarded" container=a725c05d3ec75d71cc0689127c48ad6e88828f0816ac802fdc6a76745ab6fa2f type=CONTAINER_CREATED_EVENT
Apr 17 02:47:39.260549 containerd[1607]: time="2026-04-17T02:47:39.258653431Z" level=warning msg="container event discarded" container=6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7 type=CONTAINER_CREATED_EVENT
Apr 17 02:47:39.260549 containerd[1607]: time="2026-04-17T02:47:39.259609986Z" level=warning msg="container event discarded" container=6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7 type=CONTAINER_STARTED_EVENT
Apr 17 02:47:39.353633 containerd[1607]: time="2026-04-17T02:47:39.348845624Z" level=warning msg="container event discarded" container=a725c05d3ec75d71cc0689127c48ad6e88828f0816ac802fdc6a76745ab6fa2f type=CONTAINER_STARTED_EVENT
Apr 17 02:47:39.543120 kubelet[2789]: E0417 02:47:39.538542 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:47:39.543120 kubelet[2789]: E0417 02:47:39.541296 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:47:44.526183 kubelet[2789]: E0417 02:47:44.525237 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:47:45.520875 kubelet[2789]: E0417 02:47:45.519278 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:47:49.699280 kubelet[2789]: E0417 02:47:49.697668 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:47:52.360704 containerd[1607]: time="2026-04-17T02:47:52.360439476Z" level=info msg="StopContainer for \"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\" with timeout 2 (s)"
Apr 17 02:47:52.472620 containerd[1607]: time="2026-04-17T02:47:52.471174402Z" level=info msg="Stop container \"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\" with signal terminated"
Apr 17 02:47:52.487024 containerd[1607]: time="2026-04-17T02:47:52.486224965Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 02:47:52.840966 systemd-networkd[1490]: lxc_health: Link DOWN
Apr 17 02:47:52.840977 systemd-networkd[1490]: lxc_health: Lost carrier
Apr 17 02:47:53.095103 systemd[1]: cri-containerd-6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9.scope: Deactivated successfully.
Apr 17 02:47:53.143856 systemd[1]: cri-containerd-6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9.scope: Consumed 36.473s CPU time, 123.4M memory peak, 347K read from disk, 13.3M written to disk.
Apr 17 02:47:53.223542 containerd[1607]: time="2026-04-17T02:47:53.222496440Z" level=info msg="received container exit event container_id:\"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\" id:\"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\" pid:3460 exited_at:{seconds:1776394073 nanos:168885394}"
Apr 17 02:47:53.719253 containerd[1607]: time="2026-04-17T02:47:53.717053893Z" level=info msg="StopContainer for \"7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7\" with timeout 30 (s)"
Apr 17 02:47:53.725234 containerd[1607]: time="2026-04-17T02:47:53.725100326Z" level=info msg="Stop container \"7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7\" with signal terminated"
Apr 17 02:47:54.538523 systemd[1]: cri-containerd-7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7.scope: Deactivated successfully.
Apr 17 02:47:54.555842 systemd[1]: cri-containerd-7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7.scope: Consumed 3.875s CPU time, 27.7M memory peak, 4K written to disk.
Apr 17 02:47:54.685559 containerd[1607]: time="2026-04-17T02:47:54.681574690Z" level=info msg="received container exit event container_id:\"7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7\" id:\"7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7\" pid:3399 exited_at:{seconds:1776394074 nanos:619170559}"
Apr 17 02:47:54.751415 kubelet[2789]: E0417 02:47:54.750652 2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 02:47:54.758896 containerd[1607]: time="2026-04-17T02:47:54.751585657Z" level=info msg="Kill container \"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\""
Apr 17 02:47:54.790261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9-rootfs.mount: Deactivated successfully.
Apr 17 02:47:55.469717 containerd[1607]: time="2026-04-17T02:47:55.469332082Z" level=error msg="collecting metrics for 6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9" error="ttrpc: closed"
Apr 17 02:47:55.561747 containerd[1607]: time="2026-04-17T02:47:55.556182838Z" level=info msg="StopContainer for \"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\" returns successfully"
Apr 17 02:47:55.766509 containerd[1607]: time="2026-04-17T02:47:55.763525745Z" level=info msg="StopPodSandbox for \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\""
Apr 17 02:47:55.975680 containerd[1607]: time="2026-04-17T02:47:55.956881439Z" level=info msg="Container to stop \"3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 02:47:55.975680 containerd[1607]: time="2026-04-17T02:47:55.975636501Z" level=info msg="Container to stop \"9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 02:47:55.975680 containerd[1607]: time="2026-04-17T02:47:55.975665430Z" level=info msg="Container to stop \"349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 02:47:55.975680 containerd[1607]: time="2026-04-17T02:47:55.975677095Z" level=info msg="Container to stop \"de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 02:47:56.176434 containerd[1607]: time="2026-04-17T02:47:56.172484688Z" level=info msg="Container to stop \"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 02:47:56.961203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7-rootfs.mount: Deactivated successfully.
Apr 17 02:47:57.011814 systemd[1]: cri-containerd-e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa.scope: Deactivated successfully.
Apr 17 02:47:57.218308 containerd[1607]: time="2026-04-17T02:47:57.214771454Z" level=info msg="received sandbox exit event container_id:\"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" id:\"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" exit_status:137 exited_at:{seconds:1776394077 nanos:209499613}" monitor_name=podsandbox
Apr 17 02:47:57.268787 containerd[1607]: time="2026-04-17T02:47:57.236322930Z" level=info msg="StopContainer for \"7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7\" returns successfully"
Apr 17 02:47:57.392527 containerd[1607]: time="2026-04-17T02:47:57.391887553Z" level=info msg="StopPodSandbox for \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\""
Apr 17 02:47:57.393817 containerd[1607]: time="2026-04-17T02:47:57.393664298Z" level=info msg="Container to stop \"7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 02:47:57.923639 systemd[1]: cri-containerd-6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7.scope: Deactivated successfully.
Apr 17 02:47:58.105212 containerd[1607]: time="2026-04-17T02:47:58.086865504Z" level=info msg="received sandbox exit event container_id:\"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\" id:\"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\" exit_status:137 exited_at:{seconds:1776394078 nanos:81809587}" monitor_name=podsandbox
Apr 17 02:47:58.267289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa-rootfs.mount: Deactivated successfully.
Apr 17 02:47:58.393612 containerd[1607]: time="2026-04-17T02:47:58.392678591Z" level=info msg="shim disconnected" id=e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa namespace=k8s.io
Apr 17 02:47:58.393612 containerd[1607]: time="2026-04-17T02:47:58.393602763Z" level=warning msg="cleaning up after shim disconnected" id=e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa namespace=k8s.io
Apr 17 02:47:58.393612 containerd[1607]: time="2026-04-17T02:47:58.393620215Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 02:47:58.697718 sshd[4811]: Connection closed by 10.0.0.1 port 34606
Apr 17 02:47:58.747745 sshd-session[4808]: pam_unix(sshd:session): session closed for user core
Apr 17 02:47:58.767924 containerd[1607]: time="2026-04-17T02:47:58.767548429Z" level=info msg="TearDown network for sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" successfully"
Apr 17 02:47:58.767924 containerd[1607]: time="2026-04-17T02:47:58.767650998Z" level=info msg="StopPodSandbox for \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" returns successfully"
Apr 17 02:47:58.807237 containerd[1607]: time="2026-04-17T02:47:58.807092795Z" level=info msg="received sandbox container exit event sandbox_id:\"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" exit_status:137 exited_at:{seconds:1776394077 nanos:209499613}" monitor_name=criService
Apr 17 02:47:58.873077 containerd[1607]: time="2026-04-17T02:47:58.871149476Z" level=warning msg="container event discarded" container=9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe type=CONTAINER_CREATED_EVENT
Apr 17 02:47:58.926760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa-shm.mount: Deactivated successfully.
Apr 17 02:47:58.993543 systemd[1]: sshd@44-10.0.0.6:22-10.0.0.1:34606.service: Deactivated successfully.
Apr 17 02:47:59.066077 containerd[1607]: time="2026-04-17T02:47:59.046800069Z" level=warning msg="container event discarded" container=9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe type=CONTAINER_STARTED_EVENT
Apr 17 02:47:59.077102 systemd[1]: session-45.scope: Deactivated successfully.
Apr 17 02:47:59.086481 systemd[1]: session-45.scope: Consumed 4.874s CPU time, 27.2M memory peak.
Apr 17 02:47:59.165777 systemd-logind[1585]: Session 45 logged out. Waiting for processes to exit.
Apr 17 02:47:59.291145 systemd[1]: Started sshd@45-10.0.0.6:22-10.0.0.1:33530.service - OpenSSH per-connection server daemon (10.0.0.1:33530).
Apr 17 02:47:59.337358 systemd-logind[1585]: Removed session 45.
Apr 17 02:47:59.344438 containerd[1607]: time="2026-04-17T02:47:59.343307552Z" level=warning msg="container event discarded" container=9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe type=CONTAINER_STOPPED_EVENT
Apr 17 02:47:59.484828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7-rootfs.mount: Deactivated successfully.
Apr 17 02:47:59.515453 containerd[1607]: time="2026-04-17T02:47:59.515223334Z" level=info msg="shim disconnected" id=6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7 namespace=k8s.io
Apr 17 02:47:59.515453 containerd[1607]: time="2026-04-17T02:47:59.515281610Z" level=warning msg="cleaning up after shim disconnected" id=6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7 namespace=k8s.io
Apr 17 02:47:59.515453 containerd[1607]: time="2026-04-17T02:47:59.515296973Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 02:47:59.687067 containerd[1607]: time="2026-04-17T02:47:59.686440766Z" level=warning msg="container event discarded" container=349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e type=CONTAINER_CREATED_EVENT
Apr 17 02:47:59.767653 kubelet[2789]: E0417 02:47:59.767277 2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 02:47:59.855361 containerd[1607]: time="2026-04-17T02:47:59.853815948Z" level=warning msg="container event discarded" container=349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e type=CONTAINER_STARTED_EVENT
Apr 17 02:47:59.958204 containerd[1607]: time="2026-04-17T02:47:59.949895879Z" level=warning msg="container event discarded" container=349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e type=CONTAINER_STOPPED_EVENT
Apr 17 02:48:00.025886 kubelet[2789]: I0417 02:48:00.023877 2789 scope.go:122] "RemoveContainer" containerID="6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9"
Apr 17 02:48:00.077221 containerd[1607]: time="2026-04-17T02:48:00.076680943Z" level=info msg="received sandbox container exit event sandbox_id:\"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\" exit_status:137 exited_at:{seconds:1776394078 nanos:81809587}" monitor_name=criService
Apr 17 02:48:00.081903 containerd[1607]: time="2026-04-17T02:48:00.081801542Z" level=info msg="TearDown network for sandbox \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\" successfully"
Apr 17 02:48:00.085035 containerd[1607]: time="2026-04-17T02:48:00.083998428Z" level=info msg="StopPodSandbox for \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\" returns successfully"
Apr 17 02:48:00.090725 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7-shm.mount: Deactivated successfully.
Apr 17 02:48:00.137160 containerd[1607]: time="2026-04-17T02:48:00.093072767Z" level=info msg="RemoveContainer for \"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\""
Apr 17 02:48:00.138702 kubelet[2789]: I0417 02:48:00.094685 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-config-path\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") "
Apr 17 02:48:00.138702 kubelet[2789]: I0417 02:48:00.095822 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-xtables-lock\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") "
Apr 17 02:48:00.138702 kubelet[2789]: I0417 02:48:00.131074 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-xtables-lock" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 02:48:00.138702 kubelet[2789]: I0417 02:48:00.130908 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-host-proc-sys-kernel\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") "
Apr 17 02:48:00.138702 kubelet[2789]: I0417 02:48:00.131349 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/bb101225-f8f7-43eb-ae23-bb96e3813c0a-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb101225-f8f7-43eb-ae23-bb96e3813c0a-clustermesh-secrets\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") "
Apr 17 02:48:00.146533 kubelet[2789]: I0417 02:48:00.131371 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/bb101225-f8f7-43eb-ae23-bb96e3813c0a-hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb101225-f8f7-43eb-ae23-bb96e3813c0a-hubble-tls\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") "
Apr 17 02:48:00.146533 kubelet[2789]: I0417 02:48:00.136435 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-host-proc-sys-kernel" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 02:48:00.149093 kubelet[2789]: I0417 02:48:00.147414 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-etc-cni-netd\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") "
Apr 17 02:48:00.161415 kubelet[2789]: I0417 02:48:00.160222 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-lib-modules\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") "
Apr 17 02:48:00.161415 kubelet[2789]: I0417 02:48:00.160282 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-bpf-maps\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") "
Apr 17 02:48:00.161415 kubelet[2789]: I0417 02:48:00.160306 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cni-path\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cni-path\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") "
Apr 17 02:48:00.161415 kubelet[2789]: I0417 02:48:00.160333 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-cgroup\") pod 
\"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " Apr 17 02:48:00.161415 kubelet[2789]: I0417 02:48:00.160374 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-run\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " Apr 17 02:48:00.163184 kubelet[2789]: I0417 02:48:00.160426 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-hostproc\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-hostproc\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " Apr 17 02:48:00.163184 kubelet[2789]: I0417 02:48:00.160466 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/bb101225-f8f7-43eb-ae23-bb96e3813c0a-kube-api-access-hv69p\" (UniqueName: \"kubernetes.io/projected/bb101225-f8f7-43eb-ae23-bb96e3813c0a-kube-api-access-hv69p\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " Apr 17 02:48:00.163184 kubelet[2789]: I0417 02:48:00.160499 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-host-proc-sys-net\") pod \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\" (UID: \"bb101225-f8f7-43eb-ae23-bb96e3813c0a\") " Apr 17 02:48:00.163184 kubelet[2789]: I0417 02:48:00.160574 2789 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-xtables-lock\") on node \"localhost\" DevicePath 
\"\"" Apr 17 02:48:00.163184 kubelet[2789]: I0417 02:48:00.160587 2789 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.178969 kubelet[2789]: I0417 02:48:00.148440 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-etc-cni-netd" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 02:48:00.178969 kubelet[2789]: I0417 02:48:00.160679 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-host-proc-sys-net" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 02:48:00.178969 kubelet[2789]: I0417 02:48:00.178424 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-config-path" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 02:48:00.178969 kubelet[2789]: I0417 02:48:00.178472 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cni-path" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 02:48:00.178969 kubelet[2789]: I0417 02:48:00.178684 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-lib-modules" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 02:48:00.179300 kubelet[2789]: I0417 02:48:00.178698 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-bpf-maps" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 02:48:00.179300 kubelet[2789]: I0417 02:48:00.178716 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-run" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 02:48:00.179300 kubelet[2789]: I0417 02:48:00.178725 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-cgroup" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 02:48:00.179300 kubelet[2789]: I0417 02:48:00.178738 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-hostproc" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 02:48:00.206061 sshd[4949]: Accepted publickey for core from 10.0.0.1 port 33530 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:48:00.233211 containerd[1607]: time="2026-04-17T02:48:00.229057567Z" level=info msg="RemoveContainer for \"6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9\" returns successfully" Apr 17 02:48:00.268457 kubelet[2789]: I0417 02:48:00.267924 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.274917 kubelet[2789]: I0417 02:48:00.274780 2789 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.275624 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:48:00.283064 kubelet[2789]: I0417 02:48:00.280915 2789 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.283064 kubelet[2789]: I0417 02:48:00.281129 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.283064 kubelet[2789]: I0417 02:48:00.281194 2789 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.283064 kubelet[2789]: I0417 02:48:00.281202 2789 reconciler_common.go:299] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.283064 kubelet[2789]: I0417 02:48:00.281208 2789 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.283064 kubelet[2789]: I0417 02:48:00.281214 2789 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.283064 kubelet[2789]: I0417 02:48:00.281222 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb101225-f8f7-43eb-ae23-bb96e3813c0a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.283064 kubelet[2789]: I0417 02:48:00.275430 2789 scope.go:122] "RemoveContainer" containerID="de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67" Apr 17 02:48:00.338888 systemd[1]: var-lib-kubelet-pods-bb101225\x2df8f7\x2d43eb\x2dae23\x2dbb96e3813c0a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhv69p.mount: Deactivated successfully. Apr 17 02:48:00.359333 kubelet[2789]: I0417 02:48:00.359154 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb101225-f8f7-43eb-ae23-bb96e3813c0a-hubble-tls" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 02:48:00.363514 kubelet[2789]: I0417 02:48:00.362018 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb101225-f8f7-43eb-ae23-bb96e3813c0a-kube-api-access-hv69p" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "kube-api-access-hv69p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 02:48:00.365343 systemd[1]: var-lib-kubelet-pods-bb101225\x2df8f7\x2d43eb\x2dae23\x2dbb96e3813c0a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 17 02:48:00.382445 containerd[1607]: time="2026-04-17T02:48:00.381387808Z" level=info msg="RemoveContainer for \"de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67\"" Apr 17 02:48:00.382805 kubelet[2789]: I0417 02:48:00.381897 2789 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb101225-f8f7-43eb-ae23-bb96e3813c0a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.494489 kubelet[2789]: I0417 02:48:00.494214 2789 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hv69p\" (UniqueName: \"kubernetes.io/projected/bb101225-f8f7-43eb-ae23-bb96e3813c0a-kube-api-access-hv69p\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.517552 systemd[1]: var-lib-kubelet-pods-bb101225\x2df8f7\x2d43eb\x2dae23\x2dbb96e3813c0a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 17 02:48:00.569568 kubelet[2789]: I0417 02:48:00.569073 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb101225-f8f7-43eb-ae23-bb96e3813c0a-clustermesh-secrets" pod "bb101225-f8f7-43eb-ae23-bb96e3813c0a" (UID: "bb101225-f8f7-43eb-ae23-bb96e3813c0a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 02:48:00.648597 kubelet[2789]: I0417 02:48:00.645850 2789 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb101225-f8f7-43eb-ae23-bb96e3813c0a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:00.679962 systemd-logind[1585]: New session 46 of user core. Apr 17 02:48:00.684366 containerd[1607]: time="2026-04-17T02:48:00.681439284Z" level=info msg="RemoveContainer for \"de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67\" returns successfully" Apr 17 02:48:00.688246 systemd[1]: Started session-46.scope - Session 46 of User core. Apr 17 02:48:00.733122 containerd[1607]: time="2026-04-17T02:48:00.728220713Z" level=warning msg="container event discarded" container=3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5 type=CONTAINER_CREATED_EVENT Apr 17 02:48:00.739308 kubelet[2789]: I0417 02:48:00.739121 2789 scope.go:122] "RemoveContainer" containerID="3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5" Apr 17 02:48:00.987194 containerd[1607]: time="2026-04-17T02:48:00.984829962Z" level=warning msg="container event discarded" container=3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5 type=CONTAINER_STARTED_EVENT Apr 17 02:48:01.112852 kubelet[2789]: I0417 02:48:01.112716 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9fbd2673-c278-4c6f-97e7-6522b26103b6-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fbd2673-c278-4c6f-97e7-6522b26103b6-cilium-config-path\") pod \"9fbd2673-c278-4c6f-97e7-6522b26103b6\" (UID: \"9fbd2673-c278-4c6f-97e7-6522b26103b6\") " Apr 17 02:48:01.112852 kubelet[2789]: I0417 02:48:01.112820 2789 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/9fbd2673-c278-4c6f-97e7-6522b26103b6-kube-api-access-hcqht\" (UniqueName: 
\"kubernetes.io/projected/9fbd2673-c278-4c6f-97e7-6522b26103b6-kube-api-access-hcqht\") pod \"9fbd2673-c278-4c6f-97e7-6522b26103b6\" (UID: \"9fbd2673-c278-4c6f-97e7-6522b26103b6\") " Apr 17 02:48:01.160864 kubelet[2789]: I0417 02:48:01.113241 2789 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-17T02:48:01Z","lastTransitionTime":"2026-04-17T02:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 17 02:48:01.161002 containerd[1607]: time="2026-04-17T02:48:01.116651362Z" level=warning msg="container event discarded" container=3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5 type=CONTAINER_STOPPED_EVENT Apr 17 02:48:01.161002 containerd[1607]: time="2026-04-17T02:48:01.140376697Z" level=info msg="RemoveContainer for \"3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5\"" Apr 17 02:48:01.243544 containerd[1607]: time="2026-04-17T02:48:01.242716248Z" level=info msg="RemoveContainer for \"3f1791c66a504b08998f85e29c40da3219455b77de9fd6103f6b64e4bc79c3e5\" returns successfully" Apr 17 02:48:01.268343 kubelet[2789]: I0417 02:48:01.266846 2789 scope.go:122] "RemoveContainer" containerID="349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e" Apr 17 02:48:01.275905 systemd[1]: var-lib-kubelet-pods-9fbd2673\x2dc278\x2d4c6f\x2d97e7\x2d6522b26103b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhcqht.mount: Deactivated successfully. Apr 17 02:48:01.420795 kubelet[2789]: I0417 02:48:01.418683 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fbd2673-c278-4c6f-97e7-6522b26103b6-kube-api-access-hcqht" pod "9fbd2673-c278-4c6f-97e7-6522b26103b6" (UID: "9fbd2673-c278-4c6f-97e7-6522b26103b6"). InnerVolumeSpecName "kube-api-access-hcqht". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 02:48:01.547684 kubelet[2789]: I0417 02:48:01.493237 2789 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hcqht\" (UniqueName: \"kubernetes.io/projected/9fbd2673-c278-4c6f-97e7-6522b26103b6-kube-api-access-hcqht\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:01.563879 kubelet[2789]: I0417 02:48:01.561915 2789 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fbd2673-c278-4c6f-97e7-6522b26103b6-cilium-config-path" pod "9fbd2673-c278-4c6f-97e7-6522b26103b6" (UID: "9fbd2673-c278-4c6f-97e7-6522b26103b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 02:48:01.565846 containerd[1607]: time="2026-04-17T02:48:01.565816496Z" level=info msg="RemoveContainer for \"349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e\"" Apr 17 02:48:01.591445 containerd[1607]: time="2026-04-17T02:48:01.591285004Z" level=info msg="RemoveContainer for \"349eaa5b309bb77476c84f4409fef4eaa11c52530cf0406a63a784eb62b67e8e\" returns successfully" Apr 17 02:48:01.592274 kubelet[2789]: I0417 02:48:01.592181 2789 scope.go:122] "RemoveContainer" containerID="9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe" Apr 17 02:48:01.593734 systemd[1]: Removed slice kubepods-besteffort-pod9fbd2673_c278_4c6f_97e7_6522b26103b6.slice - libcontainer container kubepods-besteffort-pod9fbd2673_c278_4c6f_97e7_6522b26103b6.slice. Apr 17 02:48:01.595715 systemd[1]: kubepods-besteffort-pod9fbd2673_c278_4c6f_97e7_6522b26103b6.slice: Consumed 3.915s CPU time, 28M memory peak, 4K written to disk. 
Apr 17 02:48:01.605917 kubelet[2789]: I0417 02:48:01.596148 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fbd2673-c278-4c6f-97e7-6522b26103b6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 17 02:48:01.643859 containerd[1607]: time="2026-04-17T02:48:01.638995244Z" level=info msg="RemoveContainer for \"9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe\"" Apr 17 02:48:01.665162 containerd[1607]: time="2026-04-17T02:48:01.664621863Z" level=warning msg="container event discarded" container=7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7 type=CONTAINER_CREATED_EVENT Apr 17 02:48:01.686513 systemd[1]: Removed slice kubepods-burstable-podbb101225_f8f7_43eb_ae23_bb96e3813c0a.slice - libcontainer container kubepods-burstable-podbb101225_f8f7_43eb_ae23_bb96e3813c0a.slice. Apr 17 02:48:01.686723 systemd[1]: kubepods-burstable-podbb101225_f8f7_43eb_ae23_bb96e3813c0a.slice: Consumed 36.707s CPU time, 123.7M memory peak, 359K read from disk, 16.2M written to disk. 
Apr 17 02:48:01.693656 containerd[1607]: time="2026-04-17T02:48:01.693594226Z" level=info msg="RemoveContainer for \"9462c8e974e06eb986418bcf76189394c80a9a12a68fc27b4036615f5ab721fe\" returns successfully" Apr 17 02:48:01.694263 kubelet[2789]: I0417 02:48:01.694235 2789 scope.go:122] "RemoveContainer" containerID="7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7" Apr 17 02:48:01.781884 containerd[1607]: time="2026-04-17T02:48:01.780547331Z" level=warning msg="container event discarded" container=de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67 type=CONTAINER_CREATED_EVENT Apr 17 02:48:01.789494 containerd[1607]: time="2026-04-17T02:48:01.789329325Z" level=info msg="RemoveContainer for \"7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7\"" Apr 17 02:48:01.849279 containerd[1607]: time="2026-04-17T02:48:01.848194794Z" level=info msg="RemoveContainer for \"7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7\" returns successfully" Apr 17 02:48:01.956025 containerd[1607]: time="2026-04-17T02:48:01.955796650Z" level=warning msg="container event discarded" container=7886d0714bb2af0838e807bde74f0a82d31c80a4d10e3879a1d7a076d9224eb7 type=CONTAINER_STARTED_EVENT Apr 17 02:48:01.986198 containerd[1607]: time="2026-04-17T02:48:01.978769054Z" level=warning msg="container event discarded" container=de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67 type=CONTAINER_STARTED_EVENT Apr 17 02:48:02.258839 containerd[1607]: time="2026-04-17T02:48:02.257609986Z" level=warning msg="container event discarded" container=de1b4b4269d9f031f6de035289a3e0872b2adcfe13d08526794ba9b9c7f65a67 type=CONTAINER_STOPPED_EVENT Apr 17 02:48:02.957018 containerd[1607]: time="2026-04-17T02:48:02.954744350Z" level=warning msg="container event discarded" container=6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9 type=CONTAINER_CREATED_EVENT Apr 17 02:48:03.561086 kubelet[2789]: I0417 02:48:03.560855 2789 
kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9fbd2673-c278-4c6f-97e7-6522b26103b6" path="/var/lib/kubelet/pods/9fbd2673-c278-4c6f-97e7-6522b26103b6/volumes" Apr 17 02:48:03.576378 kubelet[2789]: I0417 02:48:03.561543 2789 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bb101225-f8f7-43eb-ae23-bb96e3813c0a" path="/var/lib/kubelet/pods/bb101225-f8f7-43eb-ae23-bb96e3813c0a/volumes" Apr 17 02:48:03.638030 containerd[1607]: time="2026-04-17T02:48:03.632476608Z" level=warning msg="container event discarded" container=6ba9d344f6929060a2e19cccedd03caba2c350d84f867b50631bed1e98c0bea9 type=CONTAINER_STARTED_EVENT Apr 17 02:48:04.815088 kubelet[2789]: E0417 02:48:04.814700 2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:48:09.576764 sshd[4969]: Connection closed by 10.0.0.1 port 33530 Apr 17 02:48:09.607510 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Apr 17 02:48:09.892702 systemd[1]: Started sshd@46-10.0.0.6:22-10.0.0.1:42534.service - OpenSSH per-connection server daemon (10.0.0.1:42534). Apr 17 02:48:10.020262 kubelet[2789]: E0417 02:48:10.019891 2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:48:10.050370 systemd[1]: sshd@45-10.0.0.6:22-10.0.0.1:33530.service: Deactivated successfully. Apr 17 02:48:10.153803 systemd[1]: session-46.scope: Deactivated successfully. Apr 17 02:48:10.165643 systemd[1]: session-46.scope: Consumed 3.159s CPU time, 25.3M memory peak. Apr 17 02:48:10.293339 systemd-logind[1585]: Session 46 logged out. Waiting for processes to exit. 
Apr 17 02:48:10.510761 kubelet[2789]: E0417 02:48:10.500362 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:10.674368 systemd-logind[1585]: Removed session 46. Apr 17 02:48:11.455152 sshd[4981]: Accepted publickey for core from 10.0.0.1 port 42534 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:48:11.526337 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:48:12.182927 systemd-logind[1585]: New session 47 of user core. Apr 17 02:48:12.237066 systemd[1]: Started session-47.scope - Session 47 of User core. Apr 17 02:48:12.495375 kubelet[2789]: E0417 02:48:12.487100 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:12.686349 sshd[4987]: Connection closed by 10.0.0.1 port 42534 Apr 17 02:48:12.757905 sshd-session[4981]: pam_unix(sshd:session): session closed for user core Apr 17 02:48:13.003467 systemd[1]: Started sshd@47-10.0.0.6:22-10.0.0.1:42546.service - OpenSSH per-connection server daemon (10.0.0.1:42546). Apr 17 02:48:13.019383 systemd[1]: sshd@46-10.0.0.6:22-10.0.0.1:42534.service: Deactivated successfully. Apr 17 02:48:13.048897 systemd[1]: session-47.scope: Deactivated successfully. Apr 17 02:48:13.174720 systemd-logind[1585]: Session 47 logged out. Waiting for processes to exit. Apr 17 02:48:13.250220 systemd-logind[1585]: Removed session 47. 
Apr 17 02:48:14.462776 sshd[4991]: Accepted publickey for core from 10.0.0.1 port 42546 ssh2: RSA SHA256:yqnVCC+lEcHdwnn6mNvEhGe+a0efMsSNm9ThpfG/FVI Apr 17 02:48:14.551837 kubelet[2789]: E0417 02:48:14.540892 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:14.633357 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:48:15.144253 kubelet[2789]: E0417 02:48:15.142563 2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:48:15.195480 systemd-logind[1585]: New session 48 of user core. Apr 17 02:48:15.326916 systemd[1]: Started session-48.scope - Session 48 of User core. 
Apr 17 02:48:15.853382 kubelet[2789]: I0417 02:48:15.834880 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98fa5264-fea9-42a7-964f-5f6618f399f9-cilium-cgroup\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:15.864662 kubelet[2789]: I0417 02:48:15.862715 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98fa5264-fea9-42a7-964f-5f6618f399f9-xtables-lock\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:15.864662 kubelet[2789]: I0417 02:48:15.862817 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98fa5264-fea9-42a7-964f-5f6618f399f9-etc-cni-netd\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:15.864662 kubelet[2789]: I0417 02:48:15.862842 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98fa5264-fea9-42a7-964f-5f6618f399f9-lib-modules\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:15.864662 kubelet[2789]: I0417 02:48:15.862860 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98fa5264-fea9-42a7-964f-5f6618f399f9-clustermesh-secrets\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:15.864662 kubelet[2789]: I0417 02:48:15.862883 2789 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98fa5264-fea9-42a7-964f-5f6618f399f9-hostproc\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:15.864662 kubelet[2789]: I0417 02:48:15.862897 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98fa5264-fea9-42a7-964f-5f6618f399f9-cilium-config-path\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:15.880801 kubelet[2789]: I0417 02:48:15.874236 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98fa5264-fea9-42a7-964f-5f6618f399f9-cni-path\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:15.882428 kubelet[2789]: I0417 02:48:15.881852 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98fa5264-fea9-42a7-964f-5f6618f399f9-cilium-run\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:15.886737 kubelet[2789]: I0417 02:48:15.885863 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/98fa5264-fea9-42a7-964f-5f6618f399f9-cilium-ipsec-secrets\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:15.895810 kubelet[2789]: I0417 02:48:15.893334 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/98fa5264-fea9-42a7-964f-5f6618f399f9-host-proc-sys-net\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:15.933623 kubelet[2789]: I0417 02:48:15.933069 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98fa5264-fea9-42a7-964f-5f6618f399f9-host-proc-sys-kernel\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:16.008554 kubelet[2789]: I0417 02:48:16.005869 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98fa5264-fea9-42a7-964f-5f6618f399f9-hubble-tls\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:16.021505 kubelet[2789]: I0417 02:48:16.018926 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98fa5264-fea9-42a7-964f-5f6618f399f9-bpf-maps\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:16.073161 kubelet[2789]: I0417 02:48:16.070260 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xckh\" (UniqueName: \"kubernetes.io/projected/98fa5264-fea9-42a7-964f-5f6618f399f9-kube-api-access-9xckh\") pod \"cilium-m9tls\" (UID: \"98fa5264-fea9-42a7-964f-5f6618f399f9\") " pod="kube-system/cilium-m9tls" Apr 17 02:48:16.441666 systemd[1]: Created slice kubepods-burstable-pod98fa5264_fea9_42a7_964f_5f6618f399f9.slice - libcontainer container kubepods-burstable-pod98fa5264_fea9_42a7_964f_5f6618f399f9.slice. 
Apr 17 02:48:16.586396 kubelet[2789]: E0417 02:48:16.586094 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:16.846505 containerd[1607]: time="2026-04-17T02:48:16.833251755Z" level=warning msg="container event discarded" container=c77349072cfd04e6822fb9e9b1e24033214dd92ec9c47edcc3185866ffb7b1bd type=CONTAINER_CREATED_EVENT Apr 17 02:48:16.846505 containerd[1607]: time="2026-04-17T02:48:16.834309551Z" level=warning msg="container event discarded" container=c77349072cfd04e6822fb9e9b1e24033214dd92ec9c47edcc3185866ffb7b1bd type=CONTAINER_STARTED_EVENT Apr 17 02:48:16.846505 containerd[1607]: time="2026-04-17T02:48:16.834377622Z" level=warning msg="container event discarded" container=9beaaa8becb509d905b688940fbd02653bd90a913b2c7ebe6aa1b9fde45efcf9 type=CONTAINER_CREATED_EVENT Apr 17 02:48:16.846505 containerd[1607]: time="2026-04-17T02:48:16.834385972Z" level=warning msg="container event discarded" container=9beaaa8becb509d905b688940fbd02653bd90a913b2c7ebe6aa1b9fde45efcf9 type=CONTAINER_STARTED_EVENT Apr 17 02:48:16.991893 containerd[1607]: time="2026-04-17T02:48:16.982597177Z" level=warning msg="container event discarded" container=e5707c3d8568a549e739629456d7d6cb8aeb0a1dc92e216110c544606d359ac9 type=CONTAINER_CREATED_EVENT Apr 17 02:48:17.026327 containerd[1607]: time="2026-04-17T02:48:17.022254200Z" level=warning msg="container event discarded" container=4ef87bad020130b2adb574cabc53813864be11724462ce433ffed7fc9b15316a type=CONTAINER_CREATED_EVENT Apr 17 02:48:17.163163 containerd[1607]: time="2026-04-17T02:48:17.150415224Z" level=warning msg="container event discarded" container=e5707c3d8568a549e739629456d7d6cb8aeb0a1dc92e216110c544606d359ac9 type=CONTAINER_STARTED_EVENT Apr 17 
02:48:17.233120 containerd[1607]: time="2026-04-17T02:48:17.230579321Z" level=warning msg="container event discarded" container=4ef87bad020130b2adb574cabc53813864be11724462ce433ffed7fc9b15316a type=CONTAINER_STARTED_EVENT Apr 17 02:48:17.254038 kubelet[2789]: E0417 02:48:17.253130 2789 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 17 02:48:17.254038 kubelet[2789]: E0417 02:48:17.253366 2789 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Apr 17 02:48:17.254038 kubelet[2789]: E0417 02:48:17.253115 2789 projected.go:266] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Apr 17 02:48:17.254038 kubelet[2789]: E0417 02:48:17.254032 2789 projected.go:196] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-m9tls: failed to sync secret cache: timed out waiting for the condition Apr 17 02:48:17.254038 kubelet[2789]: E0417 02:48:17.253266 2789 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Apr 17 02:48:17.296331 kubelet[2789]: E0417 02:48:17.254030 2789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98fa5264-fea9-42a7-964f-5f6618f399f9-cilium-ipsec-secrets podName:98fa5264-fea9-42a7-964f-5f6618f399f9 nodeName:}" failed. No retries permitted until 2026-04-17 02:48:17.753918931 +0000 UTC m=+346.483472792 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/98fa5264-fea9-42a7-964f-5f6618f399f9-cilium-ipsec-secrets") pod "cilium-m9tls" (UID: "98fa5264-fea9-42a7-964f-5f6618f399f9") : failed to sync secret cache: timed out waiting for the condition Apr 17 02:48:17.296331 kubelet[2789]: E0417 02:48:17.254189 2789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98fa5264-fea9-42a7-964f-5f6618f399f9-clustermesh-secrets podName:98fa5264-fea9-42a7-964f-5f6618f399f9 nodeName:}" failed. No retries permitted until 2026-04-17 02:48:17.754168742 +0000 UTC m=+346.483722596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/98fa5264-fea9-42a7-964f-5f6618f399f9-clustermesh-secrets") pod "cilium-m9tls" (UID: "98fa5264-fea9-42a7-964f-5f6618f399f9") : failed to sync secret cache: timed out waiting for the condition Apr 17 02:48:17.296331 kubelet[2789]: E0417 02:48:17.254203 2789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98fa5264-fea9-42a7-964f-5f6618f399f9-hubble-tls podName:98fa5264-fea9-42a7-964f-5f6618f399f9 nodeName:}" failed. No retries permitted until 2026-04-17 02:48:17.754197175 +0000 UTC m=+346.483751027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/98fa5264-fea9-42a7-964f-5f6618f399f9-hubble-tls") pod "cilium-m9tls" (UID: "98fa5264-fea9-42a7-964f-5f6618f399f9") : failed to sync secret cache: timed out waiting for the condition Apr 17 02:48:17.367542 kubelet[2789]: E0417 02:48:17.254225 2789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98fa5264-fea9-42a7-964f-5f6618f399f9-cilium-config-path podName:98fa5264-fea9-42a7-964f-5f6618f399f9 nodeName:}" failed. No retries permitted until 2026-04-17 02:48:17.75421911 +0000 UTC m=+346.483772963 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/98fa5264-fea9-42a7-964f-5f6618f399f9-cilium-config-path") pod "cilium-m9tls" (UID: "98fa5264-fea9-42a7-964f-5f6618f399f9") : failed to sync configmap cache: timed out waiting for the condition Apr 17 02:48:18.588245 kubelet[2789]: E0417 02:48:18.558576 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:19.977830 kubelet[2789]: E0417 02:48:19.972512 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:48:20.039419 containerd[1607]: time="2026-04-17T02:48:20.038795176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m9tls,Uid:98fa5264-fea9-42a7-964f-5f6618f399f9,Namespace:kube-system,Attempt:0,}" Apr 17 02:48:20.286026 kubelet[2789]: E0417 02:48:20.267355 2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:48:20.601485 kubelet[2789]: E0417 02:48:20.582899 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:20.827508 containerd[1607]: time="2026-04-17T02:48:20.827154710Z" level=info msg="connecting to shim 63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118" 
address="unix:///run/containerd/s/9674208109994101a05bb33fd824aa0ba274d3a3c1d015c9660569062e59db51" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:48:22.135816 systemd[1]: Started cri-containerd-63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118.scope - libcontainer container 63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118. Apr 17 02:48:22.490681 kubelet[2789]: E0417 02:48:22.490325 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:24.550202 kubelet[2789]: E0417 02:48:24.499218 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:24.798506 containerd[1607]: time="2026-04-17T02:48:24.796403882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m9tls,Uid:98fa5264-fea9-42a7-964f-5f6618f399f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118\"" Apr 17 02:48:24.990555 kubelet[2789]: E0417 02:48:24.987176 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:48:25.323764 containerd[1607]: time="2026-04-17T02:48:25.318429760Z" level=info msg="CreateContainer within sandbox \"63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 17 02:48:25.332279 kubelet[2789]: E0417 02:48:25.327381 
2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:48:25.485392 containerd[1607]: time="2026-04-17T02:48:25.484170834Z" level=info msg="Container a3b3b12053104223080ae38fc0769b8b294496158ffa0708c5f677aee1e1721c: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:48:25.707324 containerd[1607]: time="2026-04-17T02:48:25.688911012Z" level=info msg="CreateContainer within sandbox \"63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3b3b12053104223080ae38fc0769b8b294496158ffa0708c5f677aee1e1721c\"" Apr 17 02:48:25.763847 containerd[1607]: time="2026-04-17T02:48:25.760575167Z" level=info msg="StartContainer for \"a3b3b12053104223080ae38fc0769b8b294496158ffa0708c5f677aee1e1721c\"" Apr 17 02:48:25.912873 containerd[1607]: time="2026-04-17T02:48:25.912403559Z" level=info msg="connecting to shim a3b3b12053104223080ae38fc0769b8b294496158ffa0708c5f677aee1e1721c" address="unix:///run/containerd/s/9674208109994101a05bb33fd824aa0ba274d3a3c1d015c9660569062e59db51" protocol=ttrpc version=3 Apr 17 02:48:26.487137 kubelet[2789]: E0417 02:48:26.485612 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:27.098202 systemd[1]: Started cri-containerd-a3b3b12053104223080ae38fc0769b8b294496158ffa0708c5f677aee1e1721c.scope - libcontainer container a3b3b12053104223080ae38fc0769b8b294496158ffa0708c5f677aee1e1721c. 
Apr 17 02:48:28.489850 kubelet[2789]: E0417 02:48:28.488365 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:29.357230 containerd[1607]: time="2026-04-17T02:48:29.356913915Z" level=info msg="StartContainer for \"a3b3b12053104223080ae38fc0769b8b294496158ffa0708c5f677aee1e1721c\" returns successfully" Apr 17 02:48:29.547510 systemd[1]: cri-containerd-a3b3b12053104223080ae38fc0769b8b294496158ffa0708c5f677aee1e1721c.scope: Deactivated successfully. Apr 17 02:48:29.949541 containerd[1607]: time="2026-04-17T02:48:29.948448601Z" level=info msg="received container exit event container_id:\"a3b3b12053104223080ae38fc0769b8b294496158ffa0708c5f677aee1e1721c\" id:\"a3b3b12053104223080ae38fc0769b8b294496158ffa0708c5f677aee1e1721c\" pid:5067 exited_at:{seconds:1776394109 nanos:774365361}" Apr 17 02:48:30.056316 kubelet[2789]: E0417 02:48:30.055516 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:48:30.426288 kubelet[2789]: E0417 02:48:30.426158 2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:48:30.650486 kubelet[2789]: E0417 02:48:30.646203 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:31.145214 kubelet[2789]: E0417 02:48:31.144170 
2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:48:31.994356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3b3b12053104223080ae38fc0769b8b294496158ffa0708c5f677aee1e1721c-rootfs.mount: Deactivated successfully. Apr 17 02:48:32.604432 kubelet[2789]: E0417 02:48:32.603795 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:32.638253 kubelet[2789]: E0417 02:48:32.637518 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:48:32.980593 containerd[1607]: time="2026-04-17T02:48:32.980352474Z" level=info msg="CreateContainer within sandbox \"63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 17 02:48:33.241715 containerd[1607]: time="2026-04-17T02:48:33.236619273Z" level=info msg="StopPodSandbox for \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\"" Apr 17 02:48:33.243615 containerd[1607]: time="2026-04-17T02:48:33.241280026Z" level=info msg="TearDown network for sandbox \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\" successfully" Apr 17 02:48:33.243615 containerd[1607]: time="2026-04-17T02:48:33.241888331Z" level=info msg="StopPodSandbox for \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\" returns successfully" Apr 17 02:48:33.262605 containerd[1607]: time="2026-04-17T02:48:33.261512120Z" level=info msg="RemovePodSandbox for 
\"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\"" Apr 17 02:48:33.262605 containerd[1607]: time="2026-04-17T02:48:33.261653956Z" level=info msg="Forcibly stopping sandbox \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\"" Apr 17 02:48:33.277877 containerd[1607]: time="2026-04-17T02:48:33.273869441Z" level=info msg="TearDown network for sandbox \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\" successfully" Apr 17 02:48:33.384200 containerd[1607]: time="2026-04-17T02:48:33.376136587Z" level=info msg="Container b34a74e99c3fdb0727c2be42a9d6ea446168d4dd5963d1a2d0c1c2666f89f9f5: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:48:33.473853 containerd[1607]: time="2026-04-17T02:48:33.473759886Z" level=info msg="Ensure that sandbox 6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7 in task-service has been cleanup successfully" Apr 17 02:48:33.595156 containerd[1607]: time="2026-04-17T02:48:33.586145779Z" level=info msg="RemovePodSandbox \"6ca1a43b2b47b80497a67af08946dd596fdd11ff7ece0a044eb18ad32881fdd7\" returns successfully" Apr 17 02:48:33.677232 containerd[1607]: time="2026-04-17T02:48:33.674896060Z" level=info msg="StopPodSandbox for \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\"" Apr 17 02:48:33.722478 containerd[1607]: time="2026-04-17T02:48:33.721505324Z" level=info msg="TearDown network for sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" successfully" Apr 17 02:48:33.727976 containerd[1607]: time="2026-04-17T02:48:33.727458858Z" level=info msg="StopPodSandbox for \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" returns successfully" Apr 17 02:48:33.740573 containerd[1607]: time="2026-04-17T02:48:33.731635775Z" level=info msg="CreateContainer within sandbox \"63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"b34a74e99c3fdb0727c2be42a9d6ea446168d4dd5963d1a2d0c1c2666f89f9f5\"" Apr 17 02:48:33.746371 containerd[1607]: time="2026-04-17T02:48:33.744335933Z" level=info msg="RemovePodSandbox for \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\"" Apr 17 02:48:33.775736 containerd[1607]: time="2026-04-17T02:48:33.775479197Z" level=info msg="Forcibly stopping sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\"" Apr 17 02:48:33.788825 containerd[1607]: time="2026-04-17T02:48:33.788482420Z" level=info msg="TearDown network for sandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" successfully" Apr 17 02:48:33.864705 containerd[1607]: time="2026-04-17T02:48:33.852837081Z" level=info msg="StartContainer for \"b34a74e99c3fdb0727c2be42a9d6ea446168d4dd5963d1a2d0c1c2666f89f9f5\"" Apr 17 02:48:33.949064 containerd[1607]: time="2026-04-17T02:48:33.946528947Z" level=info msg="Ensure that sandbox e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa in task-service has been cleanup successfully" Apr 17 02:48:33.959473 containerd[1607]: time="2026-04-17T02:48:33.957239782Z" level=info msg="connecting to shim b34a74e99c3fdb0727c2be42a9d6ea446168d4dd5963d1a2d0c1c2666f89f9f5" address="unix:///run/containerd/s/9674208109994101a05bb33fd824aa0ba274d3a3c1d015c9660569062e59db51" protocol=ttrpc version=3 Apr 17 02:48:34.073536 containerd[1607]: time="2026-04-17T02:48:34.072135782Z" level=info msg="RemovePodSandbox \"e000ff96bac755b6ebc30e17816cae458c1ab3904c43a728047e60244dc9f7fa\" returns successfully" Apr 17 02:48:34.486277 kubelet[2789]: E0417 02:48:34.483594 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:34.967571 systemd[1]: Started 
cri-containerd-b34a74e99c3fdb0727c2be42a9d6ea446168d4dd5963d1a2d0c1c2666f89f9f5.scope - libcontainer container b34a74e99c3fdb0727c2be42a9d6ea446168d4dd5963d1a2d0c1c2666f89f9f5. Apr 17 02:48:35.483243 kubelet[2789]: E0417 02:48:35.482438 2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:48:36.494239 kubelet[2789]: E0417 02:48:36.491080 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:36.501101 containerd[1607]: time="2026-04-17T02:48:36.496356526Z" level=info msg="StartContainer for \"b34a74e99c3fdb0727c2be42a9d6ea446168d4dd5963d1a2d0c1c2666f89f9f5\" returns successfully" Apr 17 02:48:36.565593 systemd[1]: cri-containerd-b34a74e99c3fdb0727c2be42a9d6ea446168d4dd5963d1a2d0c1c2666f89f9f5.scope: Deactivated successfully. Apr 17 02:48:36.685246 containerd[1607]: time="2026-04-17T02:48:36.684668415Z" level=info msg="received container exit event container_id:\"b34a74e99c3fdb0727c2be42a9d6ea446168d4dd5963d1a2d0c1c2666f89f9f5\" id:\"b34a74e99c3fdb0727c2be42a9d6ea446168d4dd5963d1a2d0c1c2666f89f9f5\" pid:5118 exited_at:{seconds:1776394116 nanos:676727729}" Apr 17 02:48:37.543805 kubelet[2789]: E0417 02:48:37.543336 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:48:37.714673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b34a74e99c3fdb0727c2be42a9d6ea446168d4dd5963d1a2d0c1c2666f89f9f5-rootfs.mount: Deactivated successfully. 
Apr 17 02:48:38.494439 kubelet[2789]: E0417 02:48:38.492679 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:38.668262 kubelet[2789]: E0417 02:48:38.662518 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:48:39.084487 containerd[1607]: time="2026-04-17T02:48:39.084060228Z" level=info msg="CreateContainer within sandbox \"63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 17 02:48:39.481413 containerd[1607]: time="2026-04-17T02:48:39.394440370Z" level=info msg="Container 05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:48:39.578987 containerd[1607]: time="2026-04-17T02:48:39.573439808Z" level=info msg="CreateContainer within sandbox \"63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4\"" Apr 17 02:48:39.648749 containerd[1607]: time="2026-04-17T02:48:39.647750336Z" level=info msg="StartContainer for \"05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4\"" Apr 17 02:48:39.742578 containerd[1607]: time="2026-04-17T02:48:39.740445074Z" level=info msg="connecting to shim 05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4" address="unix:///run/containerd/s/9674208109994101a05bb33fd824aa0ba274d3a3c1d015c9660569062e59db51" protocol=ttrpc version=3 Apr 17 02:48:40.565821 kubelet[2789]: E0417 02:48:40.562469 2789 
kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:48:40.589017 kubelet[2789]: E0417 02:48:40.588665 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:41.391875 systemd[1]: Started cri-containerd-05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4.scope - libcontainer container 05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4. Apr 17 02:48:42.497292 kubelet[2789]: E0417 02:48:42.496218 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:42.533306 kubelet[2789]: E0417 02:48:42.500603 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-bxqjr" podUID="f6941710-2162-4edf-9e32-19e4628f8551" Apr 17 02:48:43.669138 containerd[1607]: time="2026-04-17T02:48:43.668662977Z" level=error msg="get state for 05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4" error="context deadline exceeded" Apr 17 02:48:43.669138 containerd[1607]: time="2026-04-17T02:48:43.669087535Z" level=warning msg="unknown status" status=0 Apr 17 02:48:43.852265 containerd[1607]: time="2026-04-17T02:48:43.850269072Z" level=error msg="ttrpc: received 
message on inactive stream" stream=3 Apr 17 02:48:44.518494 kubelet[2789]: E0417 02:48:44.517837 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-bxqjr" podUID="f6941710-2162-4edf-9e32-19e4628f8551" Apr 17 02:48:44.537908 kubelet[2789]: E0417 02:48:44.535732 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a" Apr 17 02:48:44.926674 containerd[1607]: time="2026-04-17T02:48:44.926133017Z" level=info msg="StartContainer for \"05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4\" returns successfully" Apr 17 02:48:44.964048 systemd[1]: cri-containerd-05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4.scope: Deactivated successfully. 
Apr 17 02:48:45.087814 containerd[1607]: time="2026-04-17T02:48:45.087506839Z" level=info msg="received container exit event container_id:\"05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4\" id:\"05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4\" pid:5163 exited_at:{seconds:1776394125 nanos:58492585}" Apr 17 02:48:45.598483 kubelet[2789]: E0417 02:48:45.595781 2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 02:48:46.135288 kubelet[2789]: E0417 02:48:46.133574 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:48:46.465567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05e3a7ff5d0e901549e05461cf68581ee7d5e44ec4c51fc8e14c61469f3611d4-rootfs.mount: Deactivated successfully. 
Apr 17 02:48:46.529637 kubelet[2789]: E0417 02:48:46.497549 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-bxqjr" podUID="f6941710-2162-4edf-9e32-19e4628f8551"
Apr 17 02:48:46.535767 kubelet[2789]: E0417 02:48:46.499576 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a"
Apr 17 02:48:47.347039 kubelet[2789]: E0417 02:48:47.345431 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:48:47.468614 containerd[1607]: time="2026-04-17T02:48:47.464418885Z" level=info msg="CreateContainer within sandbox \"63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 17 02:48:48.112275 containerd[1607]: time="2026-04-17T02:48:48.111142007Z" level=info msg="Container 225bccf189d79ea1f8c2b89653800603457eea2f71b027f22c389d7dce6932c2: CDI devices from CRI Config.CDIDevices: []"
Apr 17 02:48:48.124795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849995045.mount: Deactivated successfully.
Apr 17 02:48:48.223167 containerd[1607]: time="2026-04-17T02:48:48.221900495Z" level=info msg="CreateContainer within sandbox \"63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"225bccf189d79ea1f8c2b89653800603457eea2f71b027f22c389d7dce6932c2\""
Apr 17 02:48:48.277646 containerd[1607]: time="2026-04-17T02:48:48.277338978Z" level=info msg="StartContainer for \"225bccf189d79ea1f8c2b89653800603457eea2f71b027f22c389d7dce6932c2\""
Apr 17 02:48:48.279821 containerd[1607]: time="2026-04-17T02:48:48.279618272Z" level=info msg="connecting to shim 225bccf189d79ea1f8c2b89653800603457eea2f71b027f22c389d7dce6932c2" address="unix:///run/containerd/s/9674208109994101a05bb33fd824aa0ba274d3a3c1d015c9660569062e59db51" protocol=ttrpc version=3
Apr 17 02:48:48.543979 kubelet[2789]: E0417 02:48:48.543554 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-bxqjr" podUID="f6941710-2162-4edf-9e32-19e4628f8551"
Apr 17 02:48:48.552097 kubelet[2789]: E0417 02:48:48.551053 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a"
Apr 17 02:48:49.286859 systemd[1]: Started cri-containerd-225bccf189d79ea1f8c2b89653800603457eea2f71b027f22c389d7dce6932c2.scope - libcontainer container 225bccf189d79ea1f8c2b89653800603457eea2f71b027f22c389d7dce6932c2.
Apr 17 02:48:50.052001 kubelet[2789]: E0417 02:48:50.050904 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a"
Apr 17 02:48:50.065265 kubelet[2789]: E0417 02:48:50.064581 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-bxqjr" podUID="f6941710-2162-4edf-9e32-19e4628f8551"
Apr 17 02:48:50.700353 kubelet[2789]: E0417 02:48:50.698752 2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 02:48:51.478682 systemd[1]: cri-containerd-225bccf189d79ea1f8c2b89653800603457eea2f71b027f22c389d7dce6932c2.scope: Deactivated successfully.
Apr 17 02:48:51.553526 kubelet[2789]: E0417 02:48:51.553146 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a"
Apr 17 02:48:51.568136 kubelet[2789]: E0417 02:48:51.567191 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-bxqjr" podUID="f6941710-2162-4edf-9e32-19e4628f8551"
Apr 17 02:48:51.830250 containerd[1607]: time="2026-04-17T02:48:51.824854785Z" level=info msg="received container exit event container_id:\"225bccf189d79ea1f8c2b89653800603457eea2f71b027f22c389d7dce6932c2\" id:\"225bccf189d79ea1f8c2b89653800603457eea2f71b027f22c389d7dce6932c2\" pid:5204 exited_at:{seconds:1776394131 nanos:759536779}"
Apr 17 02:48:52.554643 kubelet[2789]: E0417 02:48:52.554345 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:48:52.569866 containerd[1607]: time="2026-04-17T02:48:52.561663579Z" level=info msg="StartContainer for \"225bccf189d79ea1f8c2b89653800603457eea2f71b027f22c389d7dce6932c2\" returns successfully"
Apr 17 02:48:53.376461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-225bccf189d79ea1f8c2b89653800603457eea2f71b027f22c389d7dce6932c2-rootfs.mount: Deactivated successfully.
Apr 17 02:48:53.724334 kubelet[2789]: E0417 02:48:53.702821 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a"
Apr 17 02:48:53.792907 kubelet[2789]: E0417 02:48:53.792564 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-bxqjr" podUID="f6941710-2162-4edf-9e32-19e4628f8551"
Apr 17 02:48:54.281267 kubelet[2789]: E0417 02:48:54.281070 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:48:54.461390 containerd[1607]: time="2026-04-17T02:48:54.460785118Z" level=info msg="CreateContainer within sandbox \"63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 02:48:54.599071 kubelet[2789]: E0417 02:48:54.577269 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:48:54.655158 kubelet[2789]: E0417 02:48:54.651617 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:48:54.880378 containerd[1607]: time="2026-04-17T02:48:54.874888392Z" level=info msg="Container e884cf1a4e17b39b4f2557027df1a33f3ff383ff8ff38274cfcf1f53b2a42df2: CDI devices from CRI Config.CDIDevices: []"
Apr 17 02:48:55.047397 containerd[1607]: time="2026-04-17T02:48:55.046685863Z" level=info msg="CreateContainer within sandbox \"63d5814a753877ba57c1787188e025b6056669b963e58b19e2ed89063df8c118\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e884cf1a4e17b39b4f2557027df1a33f3ff383ff8ff38274cfcf1f53b2a42df2\""
Apr 17 02:48:55.067342 containerd[1607]: time="2026-04-17T02:48:55.065461227Z" level=info msg="StartContainer for \"e884cf1a4e17b39b4f2557027df1a33f3ff383ff8ff38274cfcf1f53b2a42df2\""
Apr 17 02:48:55.228009 containerd[1607]: time="2026-04-17T02:48:55.225918497Z" level=info msg="connecting to shim e884cf1a4e17b39b4f2557027df1a33f3ff383ff8ff38274cfcf1f53b2a42df2" address="unix:///run/containerd/s/9674208109994101a05bb33fd824aa0ba274d3a3c1d015c9660569062e59db51" protocol=ttrpc version=3
Apr 17 02:48:56.011336 kubelet[2789]: E0417 02:48:55.954900 2789 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 02:48:56.491446 systemd[1]: Started cri-containerd-e884cf1a4e17b39b4f2557027df1a33f3ff383ff8ff38274cfcf1f53b2a42df2.scope - libcontainer container e884cf1a4e17b39b4f2557027df1a33f3ff383ff8ff38274cfcf1f53b2a42df2.
Apr 17 02:48:56.703527 kubelet[2789]: E0417 02:48:56.702511 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-bxqjr" podUID="f6941710-2162-4edf-9e32-19e4628f8551"
Apr 17 02:48:56.710355 kubelet[2789]: E0417 02:48:56.708983 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a"
Apr 17 02:48:57.861006 containerd[1607]: time="2026-04-17T02:48:57.856639283Z" level=info msg="StartContainer for \"e884cf1a4e17b39b4f2557027df1a33f3ff383ff8ff38274cfcf1f53b2a42df2\" returns successfully"
Apr 17 02:48:58.539568 kubelet[2789]: E0417 02:48:58.539323 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a"
Apr 17 02:48:58.540532 kubelet[2789]: E0417 02:48:58.540471 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-bxqjr" podUID="f6941710-2162-4edf-9e32-19e4628f8551"
Apr 17 02:48:59.633355 kubelet[2789]: E0417 02:48:59.632742 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-bxqjr" podUID="f6941710-2162-4edf-9e32-19e4628f8551"
Apr 17 02:49:00.486450 kubelet[2789]: E0417 02:49:00.485924 2789 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-gt9t2" podUID="c80fcc5d-6ee3-49bd-829a-1182185f052a"
Apr 17 02:49:01.365174 kubelet[2789]: E0417 02:49:01.362746 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:49:01.508298 kubelet[2789]: E0417 02:49:01.504219 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:49:02.470521 kubelet[2789]: E0417 02:49:02.470152 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:49:02.560616 kubelet[2789]: E0417 02:49:02.559509 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:49:04.675260 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256))
Apr 17 02:49:05.531080 kubelet[2789]: E0417 02:49:05.530986 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:49:19.955041 kubelet[2789]: E0417 02:49:19.954124 2789 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:49:21.082338 sshd[4997]: Connection closed by 10.0.0.1 port 42546
Apr 17 02:49:21.085725 sshd-session[4991]: pam_unix(sshd:session): session closed for user core
Apr 17 02:49:21.187733 systemd[1]: sshd@47-10.0.0.6:22-10.0.0.1:42546.service: Deactivated successfully.
Apr 17 02:49:21.251152 systemd[1]: session-48.scope: Deactivated successfully.
Apr 17 02:49:21.254407 systemd[1]: session-48.scope: Consumed 3.906s CPU time, 25.6M memory peak.
Apr 17 02:49:21.272635 systemd-logind[1585]: Session 48 logged out. Waiting for processes to exit.
Apr 17 02:49:21.343601 systemd-logind[1585]: Removed session 48.