Apr 17 23:55:49.892675 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:55:49.892693 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:55:49.892702 kernel: BIOS-provided physical RAM map:
Apr 17 23:55:49.892707 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:55:49.892711 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 17 23:55:49.892715 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 17 23:55:49.892720 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 17 23:55:49.892725 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 17 23:55:49.892729 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 17 23:55:49.892733 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 17 23:55:49.892786 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 17 23:55:49.892791 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 17 23:55:49.892795 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 17 23:55:49.892799 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 17 23:55:49.892805 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 17 23:55:49.892810 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 17 23:55:49.892816 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 17 23:55:49.892820 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 17 23:55:49.892825 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 17 23:55:49.892829 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 23:55:49.892834 kernel: NX (Execute Disable) protection: active
Apr 17 23:55:49.892838 kernel: APIC: Static calls initialized
Apr 17 23:55:49.892843 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:55:49.892847 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Apr 17 23:55:49.892852 kernel: SMBIOS 2.8 present.
Apr 17 23:55:49.892856 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 17 23:55:49.892861 kernel: Hypervisor detected: KVM
Apr 17 23:55:49.892867 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:55:49.892871 kernel: kvm-clock: using sched offset of 6518077853 cycles
Apr 17 23:55:49.892877 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:55:49.892881 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 23:55:49.892886 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:55:49.892891 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:55:49.892896 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 17 23:55:49.892901 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:55:49.892906 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:55:49.892912 kernel: Using GB pages for direct mapping
Apr 17 23:55:49.892917 kernel: Secure boot disabled
Apr 17 23:55:49.892921 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:55:49.892926 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 17 23:55:49.892934 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 17 23:55:49.892939 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:55:49.892944 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:55:49.892950 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 17 23:55:49.892955 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:55:49.892960 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:55:49.892965 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:55:49.892970 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:55:49.892975 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 17 23:55:49.892980 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 17 23:55:49.892986 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 17 23:55:49.892991 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 17 23:55:49.892996 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 17 23:55:49.893001 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 17 23:55:49.893006 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 17 23:55:49.893011 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 17 23:55:49.893016 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 17 23:55:49.893021 kernel: No NUMA configuration found
Apr 17 23:55:49.893026 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 17 23:55:49.893032 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 17 23:55:49.893037 kernel: Zone ranges:
Apr 17 23:55:49.893042 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:55:49.893047 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 17 23:55:49.893052 kernel: Normal empty
Apr 17 23:55:49.893057 kernel: Movable zone start for each node
Apr 17 23:55:49.893062 kernel: Early memory node ranges
Apr 17 23:55:49.893067 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:55:49.893072 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 17 23:55:49.893077 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 17 23:55:49.893084 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 17 23:55:49.893088 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 17 23:55:49.893093 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 17 23:55:49.893098 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 17 23:55:49.893103 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:55:49.893108 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:55:49.893113 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 17 23:55:49.893118 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:55:49.893123 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 17 23:55:49.893129 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 17 23:55:49.893134 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 17 23:55:49.893139 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 23:55:49.893144 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:55:49.893149 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:55:49.893154 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 23:55:49.893159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:55:49.893164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:55:49.893169 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:55:49.893175 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:55:49.893180 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:55:49.893185 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:55:49.893190 kernel: TSC deadline timer available
Apr 17 23:55:49.893195 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 17 23:55:49.893200 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:55:49.893205 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 23:55:49.893210 kernel: kvm-guest: setup PV sched yield
Apr 17 23:55:49.893215 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 17 23:55:49.893220 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:55:49.893226 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:55:49.893231 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 23:55:49.893236 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 17 23:55:49.893242 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 17 23:55:49.893246 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 23:55:49.893251 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:55:49.893256 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:55:49.893262 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:55:49.893269 kernel: random: crng init done
Apr 17 23:55:49.893274 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:55:49.893279 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:55:49.893284 kernel: Fallback order for Node 0: 0
Apr 17 23:55:49.893289 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 17 23:55:49.893294 kernel: Policy zone: DMA32
Apr 17 23:55:49.893299 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:55:49.893304 kernel: Memory: 2399660K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 167136K reserved, 0K cma-reserved)
Apr 17 23:55:49.893309 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 23:55:49.893316 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:55:49.893321 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:55:49.893326 kernel: Dynamic Preempt: voluntary
Apr 17 23:55:49.893331 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:55:49.893344 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:55:49.893351 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 23:55:49.893356 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:55:49.893362 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:55:49.893367 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:55:49.893373 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:55:49.893378 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 23:55:49.893385 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 23:55:49.893391 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:55:49.893396 kernel: Console: colour dummy device 80x25
Apr 17 23:55:49.893402 kernel: printk: console [ttyS0] enabled
Apr 17 23:55:49.893407 kernel: ACPI: Core revision 20230628
Apr 17 23:55:49.893413 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 23:55:49.893420 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:55:49.893425 kernel: x2apic enabled
Apr 17 23:55:49.893430 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:55:49.893436 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 23:55:49.893442 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 23:55:49.893447 kernel: kvm-guest: setup PV IPIs
Apr 17 23:55:49.893452 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 23:55:49.893458 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:55:49.893464 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 23:55:49.893471 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 23:55:49.893476 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 23:55:49.893482 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 23:55:49.893487 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:55:49.893492 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:55:49.893498 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:55:49.893504 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:55:49.893509 kernel: RETBleed: Vulnerable
Apr 17 23:55:49.893516 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:55:49.893522 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:55:49.893527 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:55:49.893533 kernel: active return thunk: its_return_thunk
Apr 17 23:55:49.893538 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:55:49.893544 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:55:49.893549 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:55:49.893555 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:55:49.893560 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:55:49.893587 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:55:49.893593 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:55:49.893598 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:55:49.893603 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 23:55:49.893609 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 23:55:49.893614 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 23:55:49.893619 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:55:49.893625 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:55:49.893630 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:55:49.893638 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:55:49.893643 kernel: landlock: Up and running.
Apr 17 23:55:49.893648 kernel: SELinux: Initializing.
Apr 17 23:55:49.893654 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:55:49.893659 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:55:49.893665 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 23:55:49.893670 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:55:49.893676 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:55:49.893681 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:55:49.893689 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 23:55:49.893694 kernel: signal: max sigframe size: 3632
Apr 17 23:55:49.893700 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:55:49.893705 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:55:49.893711 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:55:49.893716 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:55:49.893721 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:55:49.893727 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 23:55:49.893732 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 23:55:49.893765 kernel: smpboot: Max logical packages: 1
Apr 17 23:55:49.893771 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 23:55:49.893776 kernel: devtmpfs: initialized
Apr 17 23:55:49.893782 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:55:49.893787 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 17 23:55:49.893793 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 17 23:55:49.893798 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 17 23:55:49.893804 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 17 23:55:49.893809 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 17 23:55:49.893817 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:55:49.893823 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 23:55:49.893828 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:55:49.893833 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:55:49.893839 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:55:49.893844 kernel: audit: type=2000 audit(1776470149.573:1): state=initialized audit_enabled=0 res=1
Apr 17 23:55:49.893850 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:55:49.893855 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:55:49.893861 kernel: cpuidle: using governor menu
Apr 17 23:55:49.893868 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:55:49.893874 kernel: dca service started, version 1.12.1
Apr 17 23:55:49.893880 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 17 23:55:49.893885 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 23:55:49.893891 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:55:49.893896 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:55:49.893902 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:55:49.893907 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:55:49.893913 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:55:49.893920 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:55:49.893925 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:55:49.893931 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:55:49.893936 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:55:49.893941 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:55:49.893947 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:55:49.893952 kernel: ACPI: Interpreter enabled
Apr 17 23:55:49.893958 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 23:55:49.893963 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:55:49.893970 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:55:49.893976 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:55:49.893981 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 23:55:49.893987 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:55:49.894093 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:55:49.894156 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 23:55:49.894211 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 23:55:49.894220 kernel: PCI host bridge to bus 0000:00
Apr 17 23:55:49.894280 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:55:49.894332 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:55:49.894382 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:55:49.894430 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 17 23:55:49.894479 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 23:55:49.894528 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 17 23:55:49.894609 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:55:49.894678 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 17 23:55:49.894776 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 17 23:55:49.894838 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 17 23:55:49.894894 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 17 23:55:49.894949 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 17 23:55:49.895004 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 17 23:55:49.895062 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:55:49.895124 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 17 23:55:49.895181 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 17 23:55:49.895237 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 17 23:55:49.895293 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 17 23:55:49.895354 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 17 23:55:49.895413 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 17 23:55:49.895469 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 17 23:55:49.895524 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 17 23:55:49.895615 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:55:49.895672 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 17 23:55:49.895729 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 17 23:55:49.895822 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 17 23:55:49.895881 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 17 23:55:49.895941 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 17 23:55:49.895997 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 23:55:49.896057 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 17 23:55:49.896112 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 17 23:55:49.896167 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 17 23:55:49.896227 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 17 23:55:49.896287 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 17 23:55:49.896294 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:55:49.896300 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:55:49.896305 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:55:49.896311 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:55:49.896316 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 23:55:49.896322 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 23:55:49.896327 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 23:55:49.896335 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 23:55:49.896340 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 23:55:49.896346 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 23:55:49.896351 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 23:55:49.896357 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 23:55:49.896363 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 23:55:49.896368 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 23:55:49.896374 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 23:55:49.896379 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 23:55:49.896387 kernel: iommu: Default domain type: Translated
Apr 17 23:55:49.896392 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:55:49.896398 kernel: efivars: Registered efivars operations
Apr 17 23:55:49.896403 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:55:49.896409 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:55:49.896414 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 17 23:55:49.896420 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 17 23:55:49.896425 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 17 23:55:49.896431 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 17 23:55:49.896488 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 23:55:49.896543 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 23:55:49.896625 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:55:49.896633 kernel: vgaarb: loaded
Apr 17 23:55:49.896639 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 23:55:49.896644 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 23:55:49.896650 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:55:49.896655 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:55:49.896661 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:55:49.896669 kernel: pnp: PnP ACPI init
Apr 17 23:55:49.896730 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 23:55:49.896769 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 23:55:49.896776 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:55:49.896781 kernel: NET: Registered PF_INET protocol family
Apr 17 23:55:49.896787 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:55:49.896793 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:55:49.896799 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:55:49.896807 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:55:49.896813 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:55:49.896818 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:55:49.896824 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:55:49.896830 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:55:49.896836 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:55:49.896841 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:55:49.896902 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 17 23:55:49.896961 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 17 23:55:49.897015 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:55:49.897065 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:55:49.897115 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:55:49.897165 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 17 23:55:49.897215 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 23:55:49.897263 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 17 23:55:49.897270 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:55:49.897278 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:55:49.897284 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:55:49.897289 kernel: Initialise system trusted keyrings
Apr 17 23:55:49.897295 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:55:49.897300 kernel: Key type asymmetric registered
Apr 17 23:55:49.897306 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:55:49.897311 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:55:49.897317 kernel: io scheduler mq-deadline registered
Apr 17 23:55:49.897323 kernel: io scheduler kyber registered
Apr 17 23:55:49.897330 kernel: io scheduler bfq registered
Apr 17 23:55:49.897335 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:55:49.897341 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 23:55:49.897347 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 23:55:49.897353 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 23:55:49.897358 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:55:49.897364 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:55:49.897370 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:55:49.897375 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:55:49.897382 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:55:49.897439 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 23:55:49.897447 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:55:49.897497 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 23:55:49.897548 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T23:55:49 UTC (1776470149)
Apr 17 23:55:49.897625 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 17 23:55:49.897633 kernel: intel_pstate: CPU model not supported
Apr 17 23:55:49.897638 kernel: efifb: probing for efifb
Apr 17 23:55:49.897646 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 17 23:55:49.897651 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 17 23:55:49.897657 kernel: efifb: scrolling: redraw
Apr 17 23:55:49.897662 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 17 23:55:49.897668 kernel: Console: switching to colour frame buffer device 100x37
Apr 17 23:55:49.897674 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:55:49.897692 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:55:49.897699 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:55:49.897705 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:55:49.897712 kernel: Segment Routing with IPv6
Apr 17 23:55:49.897718 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:55:49.897723 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:55:49.897729 kernel: Key type dns_resolver registered
Apr 17 23:55:49.897735 kernel: IPI shorthand broadcast: enabled
Apr 17 23:55:49.897774 kernel: sched_clock: Marking stable (824012633, 326888743)->(1262066063, -111164687)
Apr 17 23:55:49.897780 kernel: registered taskstats version 1
Apr 17 23:55:49.897786 kernel: Loading compiled-in X.509 certificates
Apr 17 23:55:49.897792 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:55:49.897800 kernel: Key type .fscrypt registered
Apr 17 23:55:49.897805 kernel: Key type fscrypt-provisioning registered
Apr 17 23:55:49.897811 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:55:49.897816 kernel: ima: Allocated hash algorithm: sha1 Apr 17 23:55:49.897822 kernel: ima: No architecture policies found Apr 17 23:55:49.897828 kernel: clk: Disabling unused clocks Apr 17 23:55:49.897835 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 17 23:55:49.897840 kernel: Write protecting the kernel read-only data: 36864k Apr 17 23:55:49.897846 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 17 23:55:49.897853 kernel: Run /init as init process Apr 17 23:55:49.897859 kernel: with arguments: Apr 17 23:55:49.897865 kernel: /init Apr 17 23:55:49.897870 kernel: with environment: Apr 17 23:55:49.897876 kernel: HOME=/ Apr 17 23:55:49.897881 kernel: TERM=linux Apr 17 23:55:49.897889 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:55:49.897897 systemd[1]: Detected virtualization kvm. Apr 17 23:55:49.897904 systemd[1]: Detected architecture x86-64. Apr 17 23:55:49.897910 systemd[1]: Running in initrd. Apr 17 23:55:49.897916 systemd[1]: No hostname configured, using default hostname. Apr 17 23:55:49.897922 systemd[1]: Hostname set to . Apr 17 23:55:49.897929 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:55:49.897936 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:55:49.897942 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:55:49.897948 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:55:49.897955 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 17 23:55:49.897961 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:55:49.897968 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:55:49.897974 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:55:49.897983 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:55:49.897989 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:55:49.897995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:55:49.898001 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:55:49.898007 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:55:49.898013 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:55:49.898019 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:55:49.898025 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:55:49.898032 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:55:49.898038 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:55:49.898045 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 23:55:49.898051 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:55:49.898057 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:55:49.898063 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:55:49.898070 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:55:49.898076 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 17 23:55:49.898082 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:55:49.898089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:55:49.898097 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:55:49.898103 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:55:49.898109 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:55:49.898115 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:55:49.898121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:55:49.898141 systemd-journald[194]: Collecting audit messages is disabled. Apr 17 23:55:49.898159 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:55:49.898165 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:55:49.898172 systemd-journald[194]: Journal started Apr 17 23:55:49.898189 systemd-journald[194]: Runtime Journal (/run/log/journal/e9a6ae10490b4108a1a848f0f09af3dd) is 6.0M, max 48.3M, 42.2M free. Apr 17 23:55:49.904805 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:55:49.906653 systemd-modules-load[195]: Inserted module 'overlay' Apr 17 23:55:49.906849 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:55:49.915883 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:55:49.919957 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:55:49.927015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:55:49.930117 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:55:49.937291 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 17 23:55:49.938607 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:55:49.946186 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:55:49.954802 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:55:49.958039 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 17 23:55:49.961463 kernel: Bridge firewalling registered Apr 17 23:55:49.959096 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:55:49.961705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:55:49.964422 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:55:49.965029 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:55:49.970281 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:55:49.978659 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:55:49.981536 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:55:49.990541 dracut-cmdline[227]: dracut-dracut-053 Apr 17 23:55:49.992630 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:55:50.002890 systemd-resolved[236]: Positive Trust Anchors: Apr 17 23:55:50.002918 systemd-resolved[236]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:55:50.002943 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:55:50.004877 systemd-resolved[236]: Defaulting to hostname 'linux'. Apr 17 23:55:50.005515 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:55:50.006709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:55:50.074806 kernel: SCSI subsystem initialized Apr 17 23:55:50.083811 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:55:50.093927 kernel: iscsi: registered transport (tcp) Apr 17 23:55:50.112257 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:55:50.112307 kernel: QLogic iSCSI HBA Driver Apr 17 23:55:50.146246 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:55:50.157901 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:55:50.183890 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 17 23:55:50.183945 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:55:50.185674 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:55:50.224822 kernel: raid6: avx512x4 gen() 44868 MB/s Apr 17 23:55:50.241816 kernel: raid6: avx512x2 gen() 41333 MB/s Apr 17 23:55:50.258815 kernel: raid6: avx512x1 gen() 41009 MB/s Apr 17 23:55:50.275808 kernel: raid6: avx2x4 gen() 33679 MB/s Apr 17 23:55:50.292804 kernel: raid6: avx2x2 gen() 36332 MB/s Apr 17 23:55:50.310766 kernel: raid6: avx2x1 gen() 28633 MB/s Apr 17 23:55:50.310839 kernel: raid6: using algorithm avx512x4 gen() 44868 MB/s Apr 17 23:55:50.328888 kernel: raid6: .... xor() 10005 MB/s, rmw enabled Apr 17 23:55:50.328950 kernel: raid6: using avx512x2 recovery algorithm Apr 17 23:55:50.349077 kernel: xor: automatically using best checksumming function avx Apr 17 23:55:50.489826 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:55:50.500347 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:55:50.507977 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:55:50.517190 systemd-udevd[415]: Using default interface naming scheme 'v255'. Apr 17 23:55:50.519897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:55:50.534044 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:55:50.545109 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Apr 17 23:55:50.567878 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:55:50.581977 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:55:50.612254 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:55:50.622945 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 17 23:55:50.632451 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:55:50.636937 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:55:50.641417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:55:50.645534 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:55:50.654966 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:55:50.654993 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 17 23:55:50.660043 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 17 23:55:50.659896 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:55:50.666383 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:55:50.670496 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:55:50.670513 kernel: GPT:9289727 != 19775487 Apr 17 23:55:50.670521 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:55:50.670528 kernel: GPT:9289727 != 19775487 Apr 17 23:55:50.670535 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:55:50.670542 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:55:50.666489 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:55:50.676474 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:55:50.687704 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:55:50.687727 kernel: AES CTR mode by8 optimization enabled Apr 17 23:55:50.680949 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:55:50.681145 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:55:50.687634 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 17 23:55:50.701780 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/vda3 scanned by (udev-worker) (464) Apr 17 23:55:50.708686 kernel: libata version 3.00 loaded. Apr 17 23:55:50.708715 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (481) Apr 17 23:55:50.706992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:55:50.712594 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:55:50.720806 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 23:55:50.720944 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 23:55:50.724825 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 17 23:55:50.724959 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 23:55:50.728505 kernel: scsi host0: ahci Apr 17 23:55:50.729411 kernel: scsi host1: ahci Apr 17 23:55:50.729490 kernel: scsi host2: ahci Apr 17 23:55:50.731542 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 17 23:55:50.737653 kernel: scsi host3: ahci Apr 17 23:55:50.737829 kernel: scsi host4: ahci Apr 17 23:55:50.737904 kernel: scsi host5: ahci Apr 17 23:55:50.739058 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Apr 17 23:55:50.753518 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 17 23:55:50.753540 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 17 23:55:50.753549 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 17 23:55:50.753557 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 17 23:55:50.753564 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 17 23:55:50.753594 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 17 23:55:50.750646 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 17 23:55:50.753416 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 17 23:55:50.761036 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 23:55:50.770913 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:55:50.771011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:55:50.771054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:55:50.785551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:55:50.776948 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:55:50.787837 disk-uuid[571]: Primary Header is updated. Apr 17 23:55:50.787837 disk-uuid[571]: Secondary Entries is updated. Apr 17 23:55:50.787837 disk-uuid[571]: Secondary Header is updated. Apr 17 23:55:50.793239 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:55:50.781186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:55:50.796926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 23:55:50.800880 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:55:50.819724 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:55:51.052789 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 23:55:51.052875 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 23:55:51.054799 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 17 23:55:51.055791 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 23:55:51.057802 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 23:55:51.058822 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 23:55:51.060823 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 17 23:55:51.062002 kernel: ata3.00: applying bridge limits Apr 17 23:55:51.063136 kernel: ata3.00: configured for UDMA/100 Apr 17 23:55:51.063795 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 17 23:55:51.106974 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 17 23:55:51.107217 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 23:55:51.127806 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 17 23:55:51.791802 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:55:51.792412 disk-uuid[573]: The operation has completed successfully. Apr 17 23:55:51.818642 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:55:51.818812 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:55:51.835951 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:55:51.842828 sh[601]: Success Apr 17 23:55:51.854802 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 17 23:55:51.882930 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:55:51.899355 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 17 23:55:51.903823 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:55:51.912653 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:55:51.912679 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:55:51.912688 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:55:51.914385 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:55:51.915671 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:55:51.921835 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:55:51.924666 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:55:51.937903 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:55:51.938639 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:55:51.950091 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:55:51.950128 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:55:51.950145 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:55:51.954796 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:55:51.961714 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:55:51.964774 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:55:51.971556 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:55:51.978940 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 17 23:55:52.020144 ignition[700]: Ignition 2.19.0 Apr 17 23:55:52.020161 ignition[700]: Stage: fetch-offline Apr 17 23:55:52.020187 ignition[700]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:55:52.020194 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:55:52.020261 ignition[700]: parsed url from cmdline: "" Apr 17 23:55:52.020263 ignition[700]: no config URL provided Apr 17 23:55:52.020267 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:55:52.020271 ignition[700]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:55:52.020287 ignition[700]: op(1): [started] loading QEMU firmware config module Apr 17 23:55:52.020291 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 17 23:55:52.026410 ignition[700]: op(1): [finished] loading QEMU firmware config module Apr 17 23:55:52.038694 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:55:52.049959 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:55:52.066414 systemd-networkd[789]: lo: Link UP Apr 17 23:55:52.066441 systemd-networkd[789]: lo: Gained carrier Apr 17 23:55:52.067303 systemd-networkd[789]: Enumeration completed Apr 17 23:55:52.067800 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:55:52.068048 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:55:52.068050 systemd-networkd[789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:55:52.070356 systemd-networkd[789]: eth0: Link UP Apr 17 23:55:52.070359 systemd-networkd[789]: eth0: Gained carrier Apr 17 23:55:52.070364 systemd[1]: Reached target network.target - Network. 
Apr 17 23:55:52.070365 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:55:52.104836 systemd-networkd[789]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 23:55:52.201060 ignition[700]: parsing config with SHA512: 6b7557c58f7f915e5d5401117620434fc2bd9508d1fc14b42ffcbbcf130e9be13bb35b7aa55eafed8a7abfa6ed0e6b7f63b8794980c4eb3c621aedef888ef7ce Apr 17 23:55:52.204770 unknown[700]: fetched base config from "system" Apr 17 23:55:52.204780 unknown[700]: fetched user config from "qemu" Apr 17 23:55:52.205121 ignition[700]: fetch-offline: fetch-offline passed Apr 17 23:55:52.205173 ignition[700]: Ignition finished successfully Apr 17 23:55:52.211491 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:55:52.215320 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 17 23:55:52.232977 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:55:52.245601 ignition[793]: Ignition 2.19.0 Apr 17 23:55:52.245620 ignition[793]: Stage: kargs Apr 17 23:55:52.245801 ignition[793]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:55:52.245809 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:55:52.246390 ignition[793]: kargs: kargs passed Apr 17 23:55:52.246420 ignition[793]: Ignition finished successfully Apr 17 23:55:52.253169 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:55:52.270987 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 17 23:55:52.281102 ignition[801]: Ignition 2.19.0 Apr 17 23:55:52.281123 ignition[801]: Stage: disks Apr 17 23:55:52.281243 ignition[801]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:55:52.281249 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:55:52.281953 ignition[801]: disks: disks passed Apr 17 23:55:52.281983 ignition[801]: Ignition finished successfully Apr 17 23:55:52.288049 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:55:52.292325 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:55:52.292429 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:55:52.296265 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:55:52.301854 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:55:52.304989 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:55:52.318974 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:55:52.330409 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:55:52.334789 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:55:52.337963 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:55:52.422778 kernel: EXT4-fs (vda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:55:52.423047 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:55:52.426471 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:55:52.443889 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:55:52.448448 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:55:52.448729 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 17 23:55:52.457413 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (820) Apr 17 23:55:52.448792 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:55:52.466949 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:55:52.466967 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:55:52.466976 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:55:52.466984 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:55:52.448810 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:55:52.468091 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:55:52.472598 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:55:52.474304 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:55:52.508217 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:55:52.513612 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:55:52.517510 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:55:52.522728 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:55:52.599872 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:55:52.620021 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:55:52.625902 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:55:52.629380 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:55:52.649298 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 17 23:55:52.652499 ignition[933]: INFO : Ignition 2.19.0 Apr 17 23:55:52.652499 ignition[933]: INFO : Stage: mount Apr 17 23:55:52.652499 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:55:52.652499 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:55:52.652499 ignition[933]: INFO : mount: mount passed Apr 17 23:55:52.652499 ignition[933]: INFO : Ignition finished successfully Apr 17 23:55:52.652643 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:55:52.674861 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:55:52.910649 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 23:55:52.923009 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:55:52.931898 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (947) Apr 17 23:55:52.931927 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:55:52.931936 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:55:52.933372 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:55:52.938786 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:55:52.939087 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:55:52.958320 ignition[964]: INFO : Ignition 2.19.0
Apr 17 23:55:52.960297 ignition[964]: INFO : Stage: files
Apr 17 23:55:52.960297 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:55:52.960297 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:55:52.960297 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:55:52.969622 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:55:52.969622 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:55:52.977538 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:55:52.980881 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:55:52.984960 unknown[964]: wrote ssh authorized keys file for user: core
Apr 17 23:55:52.987906 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:55:52.987906 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:55:52.987906 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:55:53.048519 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:55:53.139226 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:55:53.139226 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 17 23:55:53.145998 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 17 23:55:53.368913 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 17 23:55:53.425991 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 17 23:55:53.425991 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:55:53.431884 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:55:53.431884 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:55:53.437781 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:55:53.440647 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:55:53.443603 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:55:53.446528 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:55:53.449865 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:55:53.452967 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:55:53.456084 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:55:53.458934 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:55:53.463203 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:55:53.463203 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:55:53.470845 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 17 23:55:53.692642 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 17 23:55:53.771918 systemd-networkd[789]: eth0: Gained IPv6LL
Apr 17 23:55:54.151512 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:55:54.151512 ignition[964]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 17 23:55:54.157799 ignition[964]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:55:54.157799 ignition[964]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:55:54.157799 ignition[964]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 17 23:55:54.157799 ignition[964]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 17 23:55:54.157799 ignition[964]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:55:54.157799 ignition[964]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:55:54.157799 ignition[964]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 17 23:55:54.157799 ignition[964]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:55:54.180776 ignition[964]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:55:54.184374 ignition[964]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:55:54.187087 ignition[964]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:55:54.187087 ignition[964]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:55:54.187087 ignition[964]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:55:54.187087 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:55:54.187087 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:55:54.187087 ignition[964]: INFO : files: files passed
Apr 17 23:55:54.187087 ignition[964]: INFO : Ignition finished successfully
Apr 17 23:55:54.194631 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:55:54.208002 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:55:54.214313 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:55:54.214606 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:55:54.214683 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:55:54.228050 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 17 23:55:54.232544 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:55:54.232544 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:55:54.235545 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:55:54.234280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:55:54.240533 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:55:54.253004 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:55:54.270735 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:55:54.270901 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:55:54.272728 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:55:54.276569 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:55:54.279888 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:55:54.280472 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:55:54.307568 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:55:54.326918 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:55:54.334559 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:55:54.334793 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:55:54.342422 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:55:54.342556 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:55:54.342685 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:55:54.347814 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:55:54.349148 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:55:54.352648 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:55:54.357484 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:55:54.358992 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:55:54.362501 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:55:54.366061 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:55:54.373098 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:55:54.376858 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:55:54.378490 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:55:54.381571 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:55:54.381700 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:55:54.387384 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:55:54.389327 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:55:54.397157 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:55:54.398996 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:55:54.399146 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:55:54.399279 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:55:54.407300 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:55:54.407462 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:55:54.409270 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:55:54.414364 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:55:54.420865 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:55:54.425530 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:55:54.427206 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:55:54.428935 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:55:54.429007 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:55:54.432007 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:55:54.432066 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:55:54.435295 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:55:54.435377 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:55:54.438776 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:55:54.438864 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:55:54.455024 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:55:54.458867 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:55:54.460387 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:55:54.460503 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:55:54.462838 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:55:54.462902 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:55:54.475436 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:55:54.475505 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:55:54.479206 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:55:54.484611 ignition[1018]: INFO : Ignition 2.19.0
Apr 17 23:55:54.484611 ignition[1018]: INFO : Stage: umount
Apr 17 23:55:54.484611 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:55:54.484611 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:55:54.484611 ignition[1018]: INFO : umount: umount passed
Apr 17 23:55:54.484611 ignition[1018]: INFO : Ignition finished successfully
Apr 17 23:55:54.484672 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:55:54.484936 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:55:54.485361 systemd[1]: Stopped target network.target - Network.
Apr 17 23:55:54.488727 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:55:54.488825 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:55:54.508872 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:55:54.508964 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:55:54.512453 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:55:54.512499 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:55:54.516190 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:55:54.516243 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:55:54.518318 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:55:54.521652 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:55:54.533870 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:55:54.533982 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:55:54.535819 systemd-networkd[789]: eth0: DHCPv6 lease lost
Apr 17 23:55:54.541061 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:55:54.541174 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:55:54.547037 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:55:54.548927 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:55:54.553106 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:55:54.553159 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:55:54.556764 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:55:54.556805 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:55:54.580460 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:55:54.582507 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:55:54.582572 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:55:54.586670 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:55:54.586711 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:55:54.590136 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:55:54.590174 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:55:54.592415 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:55:54.592451 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:55:54.597909 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:55:54.611647 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:55:54.611875 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:55:54.618392 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:55:54.618562 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:55:54.620370 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:55:54.620419 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:55:54.624638 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:55:54.624666 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:55:54.629370 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:55:54.629411 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:55:54.637568 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:55:54.637631 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:55:54.644171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:55:54.644230 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:55:54.657914 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:55:54.658011 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:55:54.658046 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:55:54.661980 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 23:55:54.662030 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:55:54.665716 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:55:54.665807 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:55:54.671732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:55:54.671813 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:55:54.676307 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:55:54.676415 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:55:54.681531 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:55:54.687396 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:55:54.701651 systemd[1]: Switching root.
Apr 17 23:55:54.736619 systemd-journald[194]: Journal stopped
Apr 17 23:55:55.495878 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:55:55.495959 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:55:55.495973 kernel: SELinux: policy capability open_perms=1
Apr 17 23:55:55.495986 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:55:55.495999 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:55:55.496006 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:55:55.496014 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:55:55.496021 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:55:55.496028 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:55:55.496036 kernel: audit: type=1403 audit(1776470154.859:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:55:55.496047 systemd[1]: Successfully loaded SELinux policy in 32.612ms.
Apr 17 23:55:55.496060 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.905ms.
Apr 17 23:55:55.496069 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:55:55.496079 systemd[1]: Detected virtualization kvm.
Apr 17 23:55:55.496088 systemd[1]: Detected architecture x86-64.
Apr 17 23:55:55.496095 systemd[1]: Detected first boot.
Apr 17 23:55:55.496104 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:55:55.496112 zram_generator::config[1063]: No configuration found.
Apr 17 23:55:55.496122 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:55:55.496130 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:55:55.496138 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:55:55.496149 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:55:55.496158 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:55:55.496165 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:55:55.496173 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:55:55.496180 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:55:55.496189 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:55:55.496197 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:55:55.496205 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:55:55.496213 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:55:55.496223 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:55:55.496230 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:55:55.496238 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:55:55.496246 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:55:55.496253 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:55:55.496261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:55:55.496269 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:55:55.496276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:55:55.496285 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:55:55.496294 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:55:55.496302 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:55:55.496311 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:55:55.496319 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:55:55.496326 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:55:55.496334 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:55:55.496341 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:55:55.496349 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:55:55.496358 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:55:55.496366 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:55:55.496373 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:55:55.496382 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:55:55.496394 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:55:55.496407 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:55:55.496420 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:55:55.496428 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:55:55.496438 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:55:55.496447 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:55:55.496460 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:55:55.496473 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:55:55.496494 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:55:55.496502 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:55:55.496510 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:55:55.496518 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:55:55.496526 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:55:55.496535 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:55:55.496544 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:55:55.496552 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:55:55.496559 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:55:55.496567 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:55:55.496575 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:55:55.496609 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:55:55.496620 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:55:55.496630 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:55:55.496638 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:55:55.496646 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:55:55.496654 kernel: fuse: init (API version 7.39)
Apr 17 23:55:55.496661 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:55:55.496669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:55:55.496678 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:55:55.496685 kernel: loop: module loaded
Apr 17 23:55:55.496692 kernel: ACPI: bus type drm_connector registered
Apr 17 23:55:55.496701 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:55:55.496709 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:55:55.496717 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:55:55.496724 systemd[1]: Stopped verity-setup.service.
Apr 17 23:55:55.496783 systemd-journald[1147]: Collecting audit messages is disabled.
Apr 17 23:55:55.496803 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:55:55.496812 systemd-journald[1147]: Journal started
Apr 17 23:55:55.496832 systemd-journald[1147]: Runtime Journal (/run/log/journal/e9a6ae10490b4108a1a848f0f09af3dd) is 6.0M, max 48.3M, 42.2M free.
Apr 17 23:55:55.193117 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:55:55.214204 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 17 23:55:55.214564 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:55:55.503018 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:55:55.504252 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:55:55.506157 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:55:55.508564 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:55:55.511352 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:55:55.514093 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:55:55.517266 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:55:55.520059 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:55:55.522517 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:55:55.525014 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:55:55.525209 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:55:55.527490 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:55:55.527883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:55:55.530144 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:55:55.530309 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:55:55.532906 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:55:55.533041 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:55:55.535843 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:55:55.535997 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:55:55.538146 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:55:55.538279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:55:55.540348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:55:55.542505 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:55:55.544958 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:55:55.547273 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:55:55.558305 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:55:55.568941 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:55:55.572710 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:55:55.574915 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:55:55.574960 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:55:55.578042 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:55:55.581221 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:55:55.584865 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:55:55.586683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:55:55.587816 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:55:55.590544 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:55:55.592668 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:55:55.593393 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:55:55.595890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:55:55.596689 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:55:55.602941 systemd-journald[1147]: Time spent on flushing to /var/log/journal/e9a6ae10490b4108a1a848f0f09af3dd is 13.736ms for 1000 entries.
Apr 17 23:55:55.602941 systemd-journald[1147]: System Journal (/var/log/journal/e9a6ae10490b4108a1a848f0f09af3dd) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:55:55.630481 systemd-journald[1147]: Received client request to flush runtime journal.
Apr 17 23:55:55.600934 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:55:55.604021 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:55:55.610218 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:55:55.615452 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:55:55.617815 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:55:55.623791 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:55:55.626310 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:55:55.633811 kernel: loop0: detected capacity change from 0 to 219192
Apr 17 23:55:55.637920 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:55:55.642458 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:55:55.649145 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:55:55.651408 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Apr 17 23:55:55.651435 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Apr 17 23:55:55.659051 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:55:55.661795 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:55:55.663873 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:55:55.668020 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:55:55.670643 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 17 23:55:55.679223 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:55:55.679692 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:55:55.693788 kernel: loop1: detected capacity change from 0 to 142488
Apr 17 23:55:55.694246 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:55:55.703882 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:55:55.725841 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 17 23:55:55.725875 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 17 23:55:55.730634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:55:55.745811 kernel: loop2: detected capacity change from 0 to 140768 Apr 17 23:55:55.784875 kernel: loop3: detected capacity change from 0 to 219192 Apr 17 23:55:55.808780 kernel: loop4: detected capacity change from 0 to 142488 Apr 17 23:55:55.824854 kernel: loop5: detected capacity change from 0 to 140768 Apr 17 23:55:55.832615 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 17 23:55:55.833633 (sd-merge)[1205]: Merged extensions into '/usr'. Apr 17 23:55:55.836486 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 23:55:55.836574 systemd[1]: Reloading... Apr 17 23:55:55.887789 zram_generator::config[1230]: No configuration found. Apr 17 23:55:55.907074 ldconfig[1173]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 23:55:55.961132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:55:55.991993 systemd[1]: Reloading finished in 155 ms. Apr 17 23:55:56.030803 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:55:56.033335 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 17 23:55:56.035880 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 23:55:56.055104 systemd[1]: Starting ensure-sysext.service... Apr 17 23:55:56.057683 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:55:56.060835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:55:56.065425 systemd[1]: Reloading requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... Apr 17 23:55:56.065435 systemd[1]: Reloading... 
Apr 17 23:55:56.074900 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 23:55:56.075158 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 23:55:56.075692 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 23:55:56.075920 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Apr 17 23:55:56.075975 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Apr 17 23:55:56.077627 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:55:56.077646 systemd-tmpfiles[1271]: Skipping /boot Apr 17 23:55:56.082329 systemd-udevd[1272]: Using default interface naming scheme 'v255'. Apr 17 23:55:56.083259 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:55:56.083266 systemd-tmpfiles[1271]: Skipping /boot Apr 17 23:55:56.102891 zram_generator::config[1298]: No configuration found. Apr 17 23:55:56.136191 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1311) Apr 17 23:55:56.176800 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 17 23:55:56.180855 kernel: ACPI: button: Power Button [PWRF] Apr 17 23:55:56.193339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 17 23:55:56.200849 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 17 23:55:56.200930 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 17 23:55:56.215828 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 17 23:55:56.215935 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 17 23:55:56.216029 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 17 23:55:56.232272 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 23:55:56.265865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 23:55:56.268815 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 23:55:56.268902 systemd[1]: Reloading finished in 203 ms. Apr 17 23:55:56.314921 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:55:56.353210 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:55:56.369710 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 17 23:55:56.375982 systemd[1]: Finished ensure-sysext.service. Apr 17 23:55:56.390025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:55:56.406164 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:55:56.409843 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 23:55:56.412110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:55:56.413096 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 17 23:55:56.418914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 17 23:55:56.423652 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:55:56.429006 lvm[1372]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:55:56.426931 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:55:56.429813 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:55:56.431786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:55:56.432913 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:55:56.439496 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:55:56.443679 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:55:56.448287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:55:56.451910 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 17 23:55:56.454574 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 17 23:55:56.459330 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:55:56.461528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:55:56.462180 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:55:56.464944 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:55:56.465084 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:55:56.465149 augenrules[1399]: No rules Apr 17 23:55:56.467784 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 17 23:55:56.470558 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:55:56.470717 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:55:56.472829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:55:56.472949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:55:56.475370 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:55:56.475463 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:55:56.475712 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:55:56.476339 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 23:55:56.482324 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:55:56.488922 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 17 23:55:56.491281 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:55:56.491325 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:55:56.492465 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:55:56.493816 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 23:55:56.496819 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:55:56.494460 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 23:55:56.495457 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Apr 17 23:55:56.499092 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 23:55:56.511176 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:55:56.515494 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:55:56.521099 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:55:56.523447 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 23:55:56.561882 systemd-networkd[1391]: lo: Link UP Apr 17 23:55:56.561904 systemd-networkd[1391]: lo: Gained carrier Apr 17 23:55:56.562815 systemd-networkd[1391]: Enumeration completed Apr 17 23:55:56.562898 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:55:56.564213 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:55:56.564216 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:55:56.566221 systemd-networkd[1391]: eth0: Link UP Apr 17 23:55:56.566227 systemd-networkd[1391]: eth0: Gained carrier Apr 17 23:55:56.566237 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:55:56.573208 systemd-resolved[1394]: Positive Trust Anchors: Apr 17 23:55:56.573239 systemd-resolved[1394]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:55:56.573264 systemd-resolved[1394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:55:56.576228 systemd-resolved[1394]: Defaulting to hostname 'linux'. Apr 17 23:55:56.577935 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 23:55:56.580385 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 17 23:55:56.582462 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:55:56.582808 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 23:55:56.583567 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection. Apr 17 23:55:57.754394 systemd-resolved[1394]: Clock change detected. Flushing caches. Apr 17 23:55:57.754442 systemd-timesyncd[1395]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 17 23:55:57.754486 systemd-timesyncd[1395]: Initial clock synchronization to Fri 2026-04-17 23:55:57.754329 UTC. Apr 17 23:55:57.754621 systemd[1]: Reached target network.target - Network. Apr 17 23:55:57.756222 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:55:57.758219 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:55:57.760045 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Apr 17 23:55:57.762097 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:55:57.764184 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:55:57.766258 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:55:57.766293 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:55:57.767793 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:55:57.769589 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:55:57.771431 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:55:57.773519 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:55:57.775399 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 23:55:57.778342 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:55:57.795572 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:55:57.798483 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:55:57.800385 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:55:57.802030 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:55:57.803609 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:55:57.803641 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:55:57.804553 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:55:57.807149 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:55:57.809527 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Apr 17 23:55:57.813839 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:55:57.815644 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:55:57.816773 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:55:57.820972 jq[1439]: false Apr 17 23:55:57.821093 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:55:57.824457 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:55:57.827926 dbus-daemon[1438]: [system] SELinux support is enabled Apr 17 23:55:57.828189 extend-filesystems[1440]: Found loop3 Apr 17 23:55:57.828366 extend-filesystems[1440]: Found loop4 Apr 17 23:55:57.828366 extend-filesystems[1440]: Found loop5 Apr 17 23:55:57.828366 extend-filesystems[1440]: Found sr0 Apr 17 23:55:57.828366 extend-filesystems[1440]: Found vda Apr 17 23:55:57.828366 extend-filesystems[1440]: Found vda1 Apr 17 23:55:57.828366 extend-filesystems[1440]: Found vda2 Apr 17 23:55:57.828366 extend-filesystems[1440]: Found vda3 Apr 17 23:55:57.828366 extend-filesystems[1440]: Found usr Apr 17 23:55:57.828366 extend-filesystems[1440]: Found vda4 Apr 17 23:55:57.828366 extend-filesystems[1440]: Found vda6 Apr 17 23:55:57.828366 extend-filesystems[1440]: Found vda7 Apr 17 23:55:57.828772 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 23:55:57.847841 extend-filesystems[1440]: Found vda9 Apr 17 23:55:57.847841 extend-filesystems[1440]: Checking size of /dev/vda9 Apr 17 23:55:57.833590 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:55:57.837998 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 17 23:55:57.838258 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:55:57.852610 jq[1457]: true Apr 17 23:55:57.838911 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 23:55:57.852833 update_engine[1453]: I20260417 23:55:57.852573 1453 main.cc:92] Flatcar Update Engine starting Apr 17 23:55:57.844473 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:55:57.848081 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:55:57.853483 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:55:57.857760 update_engine[1453]: I20260417 23:55:57.855837 1453 update_check_scheduler.cc:74] Next update check in 5m31s Apr 17 23:55:57.857782 extend-filesystems[1440]: Resized partition /dev/vda9 Apr 17 23:55:57.860825 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 17 23:55:57.854645 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:55:57.860876 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:55:57.854890 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:55:57.855804 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:55:57.858352 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:55:57.858466 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 17 23:55:57.873168 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:55:57.878689 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1324) Apr 17 23:55:57.882749 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 17 23:55:57.885815 tar[1463]: linux-amd64/LICENSE Apr 17 23:55:57.887125 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:55:57.889165 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:55:57.897192 jq[1464]: true Apr 17 23:55:57.900283 tar[1463]: linux-amd64/helm Apr 17 23:55:57.889184 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:55:57.900341 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 17 23:55:57.900341 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 17 23:55:57.900341 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 17 23:55:57.891978 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:55:57.926942 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Apr 17 23:55:57.891991 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:55:57.899785 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 23:55:57.901879 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:55:57.901999 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Apr 17 23:55:57.918927 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:55:57.927951 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Apr 17 23:55:57.927962 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:55:57.929824 systemd-logind[1449]: New seat seat0. Apr 17 23:55:57.933088 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:55:57.950546 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:55:57.952269 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:55:57.955780 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 17 23:55:57.991788 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:55:58.012307 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:55:58.021900 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:55:58.027782 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:55:58.027926 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:55:58.031073 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:55:58.041725 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:55:58.045580 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:55:58.048292 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:55:58.050349 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 17 23:55:58.053419 containerd[1468]: time="2026-04-17T23:55:58.053346398Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:55:58.075209 containerd[1468]: time="2026-04-17T23:55:58.075005294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:55:58.076708 containerd[1468]: time="2026-04-17T23:55:58.076425643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:55:58.076708 containerd[1468]: time="2026-04-17T23:55:58.076448771Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 17 23:55:58.076708 containerd[1468]: time="2026-04-17T23:55:58.076460931Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:55:58.076708 containerd[1468]: time="2026-04-17T23:55:58.076602465Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:55:58.076708 containerd[1468]: time="2026-04-17T23:55:58.076614834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:55:58.076708 containerd[1468]: time="2026-04-17T23:55:58.076692861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:55:58.076708 containerd[1468]: time="2026-04-17T23:55:58.076702294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:55:58.076850 containerd[1468]: time="2026-04-17T23:55:58.076833562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:55:58.076850 containerd[1468]: time="2026-04-17T23:55:58.076843427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:55:58.076876 containerd[1468]: time="2026-04-17T23:55:58.076851859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:55:58.076876 containerd[1468]: time="2026-04-17T23:55:58.076858404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 17 23:55:58.076951 containerd[1468]: time="2026-04-17T23:55:58.076909339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:55:58.077104 containerd[1468]: time="2026-04-17T23:55:58.077069367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:55:58.077196 containerd[1468]: time="2026-04-17T23:55:58.077162699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:55:58.077196 containerd[1468]: time="2026-04-17T23:55:58.077188781Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 17 23:55:58.077268 containerd[1468]: time="2026-04-17T23:55:58.077247659Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:55:58.077312 containerd[1468]: time="2026-04-17T23:55:58.077292782Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:55:58.083884 containerd[1468]: time="2026-04-17T23:55:58.083827284Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:55:58.083884 containerd[1468]: time="2026-04-17T23:55:58.083891017Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:55:58.083983 containerd[1468]: time="2026-04-17T23:55:58.083905759Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:55:58.083983 containerd[1468]: time="2026-04-17T23:55:58.083918287Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 17 23:55:58.083983 containerd[1468]: time="2026-04-17T23:55:58.083928047Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:55:58.084047 containerd[1468]: time="2026-04-17T23:55:58.084034693Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:55:58.084266 containerd[1468]: time="2026-04-17T23:55:58.084225204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:55:58.084353 containerd[1468]: time="2026-04-17T23:55:58.084316926Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:55:58.084372 containerd[1468]: time="2026-04-17T23:55:58.084350877Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 17 23:55:58.084372 containerd[1468]: time="2026-04-17T23:55:58.084361312Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:55:58.084372 containerd[1468]: time="2026-04-17T23:55:58.084370793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:55:58.084421 containerd[1468]: time="2026-04-17T23:55:58.084380833Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:55:58.084421 containerd[1468]: time="2026-04-17T23:55:58.084390013Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:55:58.084421 containerd[1468]: time="2026-04-17T23:55:58.084399370Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:55:58.084421 containerd[1468]: time="2026-04-17T23:55:58.084409136Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:55:58.084421 containerd[1468]: time="2026-04-17T23:55:58.084417793Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:55:58.084490 containerd[1468]: time="2026-04-17T23:55:58.084426199Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:55:58.084490 containerd[1468]: time="2026-04-17T23:55:58.084434183Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:55:58.084490 containerd[1468]: time="2026-04-17T23:55:58.084448399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1
Apr 17 23:55:58.084490 containerd[1468]: time="2026-04-17T23:55:58.084461840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084490 containerd[1468]: time="2026-04-17T23:55:58.084470598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084490 containerd[1468]: time="2026-04-17T23:55:58.084480937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084490 containerd[1468]: time="2026-04-17T23:55:58.084490202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084624 containerd[1468]: time="2026-04-17T23:55:58.084499571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084624 containerd[1468]: time="2026-04-17T23:55:58.084533755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084624 containerd[1468]: time="2026-04-17T23:55:58.084546327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084624 containerd[1468]: time="2026-04-17T23:55:58.084556031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084624 containerd[1468]: time="2026-04-17T23:55:58.084566611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084624 containerd[1468]: time="2026-04-17T23:55:58.084574864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084624 containerd[1468]: time="2026-04-17T23:55:58.084583108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084624 containerd[1468]: time="2026-04-17T23:55:58.084591995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084624 containerd[1468]: time="2026-04-17T23:55:58.084602469Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:55:58.084624 containerd[1468]: time="2026-04-17T23:55:58.084616957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084624 containerd[1468]: time="2026-04-17T23:55:58.084625237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084861 containerd[1468]: time="2026-04-17T23:55:58.084633686Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:55:58.084861 containerd[1468]: time="2026-04-17T23:55:58.084704133Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:55:58.084861 containerd[1468]: time="2026-04-17T23:55:58.084718344Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:55:58.084861 containerd[1468]: time="2026-04-17T23:55:58.084726098Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:55:58.084861 containerd[1468]: time="2026-04-17T23:55:58.084735721Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:55:58.084861 containerd[1468]: time="2026-04-17T23:55:58.084742963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.084861 containerd[1468]: time="2026-04-17T23:55:58.084754316Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 17 23:55:58.084861 containerd[1468]: time="2026-04-17T23:55:58.084761436Z" level=info msg="NRI interface is disabled by configuration."
Apr 17 23:55:58.084861 containerd[1468]: time="2026-04-17T23:55:58.084768250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 17 23:55:58.085049 containerd[1468]: time="2026-04-17T23:55:58.084973232Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:55:58.085049 containerd[1468]: time="2026-04-17T23:55:58.085014100Z" level=info msg="Connect containerd service"
Apr 17 23:55:58.085216 containerd[1468]: time="2026-04-17T23:55:58.085090917Z" level=info msg="using legacy CRI server"
Apr 17 23:55:58.085216 containerd[1468]: time="2026-04-17T23:55:58.085097295Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:55:58.085272 containerd[1468]: time="2026-04-17T23:55:58.085246854Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:55:58.085807 containerd[1468]: time="2026-04-17T23:55:58.085777803Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:55:58.085959 containerd[1468]: time="2026-04-17T23:55:58.085923803Z" level=info msg="Start subscribing containerd event"
Apr 17 23:55:58.085993 containerd[1468]: time="2026-04-17T23:55:58.085975487Z" level=info msg="Start recovering state"
Apr 17 23:55:58.086163 containerd[1468]: time="2026-04-17T23:55:58.086027475Z" level=info msg="Start event monitor"
Apr 17 23:55:58.086163 containerd[1468]: time="2026-04-17T23:55:58.086041369Z" level=info msg="Start snapshots syncer"
Apr 17 23:55:58.086163 containerd[1468]: time="2026-04-17T23:55:58.086049092Z" level=info msg="Start cni network conf syncer for default"
Apr 17 23:55:58.086163 containerd[1468]: time="2026-04-17T23:55:58.086054080Z" level=info msg="Start streaming server"
Apr 17 23:55:58.086354 containerd[1468]: time="2026-04-17T23:55:58.086329372Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 23:55:58.086394 containerd[1468]: time="2026-04-17T23:55:58.086375232Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 17 23:55:58.086539 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 23:55:58.088503 containerd[1468]: time="2026-04-17T23:55:58.086695089Z" level=info msg="containerd successfully booted in 0.033959s"
Apr 17 23:55:58.293006 tar[1463]: linux-amd64/README.md
Apr 17 23:55:58.308554 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 23:55:58.910064 systemd-networkd[1391]: eth0: Gained IPv6LL
Apr 17 23:55:58.912595 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:55:58.915172 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:55:58.926075 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 17 23:55:58.929559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:55:58.932936 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:55:58.947185 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 17 23:55:58.947340 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 17 23:55:58.949935 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:55:58.955188 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:55:59.586089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:55:59.588566 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 17 23:55:59.590102 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:55:59.591844 systemd[1]: Startup finished in 941ms (kernel) + 5.167s (initrd) + 3.593s (userspace) = 9.702s.
Apr 17 23:55:59.966568 kubelet[1549]: E0417 23:55:59.966383 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:55:59.968359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:55:59.968558 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:56:04.183580 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:56:04.184686 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:54880.service - OpenSSH per-connection server daemon (10.0.0.1:54880).
Apr 17 23:56:04.232775 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 54880 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:56:04.234882 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:56:04.242806 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 17 23:56:04.249103 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 17 23:56:04.252596 systemd-logind[1449]: New session 1 of user core.
Apr 17 23:56:04.264954 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 17 23:56:04.267903 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 17 23:56:04.275078 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 17 23:56:04.353183 systemd[1566]: Queued start job for default target default.target.
Apr 17 23:56:04.361820 systemd[1566]: Created slice app.slice - User Application Slice.
Apr 17 23:56:04.361870 systemd[1566]: Reached target paths.target - Paths.
Apr 17 23:56:04.361885 systemd[1566]: Reached target timers.target - Timers.
Apr 17 23:56:04.363106 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 17 23:56:04.372408 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 17 23:56:04.372488 systemd[1566]: Reached target sockets.target - Sockets.
Apr 17 23:56:04.372502 systemd[1566]: Reached target basic.target - Basic System.
Apr 17 23:56:04.372537 systemd[1566]: Reached target default.target - Main User Target.
Apr 17 23:56:04.372591 systemd[1566]: Startup finished in 91ms.
Apr 17 23:56:04.372768 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 17 23:56:04.374040 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 17 23:56:04.433225 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:54892.service - OpenSSH per-connection server daemon (10.0.0.1:54892).
Apr 17 23:56:04.476148 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 54892 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:56:04.477703 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:56:04.481774 systemd-logind[1449]: New session 2 of user core.
Apr 17 23:56:04.491974 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 17 23:56:04.545513 sshd[1577]: pam_unix(sshd:session): session closed for user core
Apr 17 23:56:04.555961 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:54892.service: Deactivated successfully.
Apr 17 23:56:04.557297 systemd[1]: session-2.scope: Deactivated successfully.
Apr 17 23:56:04.558266 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit.
Apr 17 23:56:04.559183 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:54896.service - OpenSSH per-connection server daemon (10.0.0.1:54896).
Apr 17 23:56:04.559766 systemd-logind[1449]: Removed session 2.
Apr 17 23:56:04.593823 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 54896 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:56:04.595044 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:56:04.598484 systemd-logind[1449]: New session 3 of user core.
Apr 17 23:56:04.615018 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 17 23:56:04.663173 sshd[1584]: pam_unix(sshd:session): session closed for user core
Apr 17 23:56:04.678112 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:54896.service: Deactivated successfully.
Apr 17 23:56:04.679415 systemd[1]: session-3.scope: Deactivated successfully.
Apr 17 23:56:04.680410 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit.
Apr 17 23:56:04.687927 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:54902.service - OpenSSH per-connection server daemon (10.0.0.1:54902).
Apr 17 23:56:04.688686 systemd-logind[1449]: Removed session 3.
Apr 17 23:56:04.719456 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 54902 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:56:04.720614 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:56:04.724700 systemd-logind[1449]: New session 4 of user core.
Apr 17 23:56:04.730835 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 17 23:56:04.782423 sshd[1591]: pam_unix(sshd:session): session closed for user core
Apr 17 23:56:04.790982 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:54902.service: Deactivated successfully.
Apr 17 23:56:04.792176 systemd[1]: session-4.scope: Deactivated successfully.
Apr 17 23:56:04.793129 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit.
Apr 17 23:56:04.794313 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:54904.service - OpenSSH per-connection server daemon (10.0.0.1:54904).
Apr 17 23:56:04.795010 systemd-logind[1449]: Removed session 4.
Apr 17 23:56:04.831814 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 54904 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:56:04.833306 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:56:04.837233 systemd-logind[1449]: New session 5 of user core.
Apr 17 23:56:04.847814 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 17 23:56:04.904460 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 17 23:56:04.904746 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:56:04.918867 sudo[1601]: pam_unix(sudo:session): session closed for user root
Apr 17 23:56:04.920838 sshd[1598]: pam_unix(sshd:session): session closed for user core
Apr 17 23:56:04.930159 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:54904.service: Deactivated successfully.
Apr 17 23:56:04.931484 systemd[1]: session-5.scope: Deactivated successfully.
Apr 17 23:56:04.932575 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit.
Apr 17 23:56:04.933528 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:54918.service - OpenSSH per-connection server daemon (10.0.0.1:54918).
Apr 17 23:56:04.934256 systemd-logind[1449]: Removed session 5.
Apr 17 23:56:04.973512 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 54918 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:56:04.974898 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:56:04.978743 systemd-logind[1449]: New session 6 of user core.
Apr 17 23:56:04.993048 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 17 23:56:05.046430 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 17 23:56:05.046788 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:56:05.050301 sudo[1610]: pam_unix(sudo:session): session closed for user root
Apr 17 23:56:05.055186 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 17 23:56:05.055459 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:56:05.077006 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 17 23:56:05.078322 auditctl[1613]: No rules
Apr 17 23:56:05.078609 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 23:56:05.078795 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 17 23:56:05.080694 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:56:05.107023 augenrules[1631]: No rules
Apr 17 23:56:05.108387 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:56:05.109241 sudo[1609]: pam_unix(sudo:session): session closed for user root
Apr 17 23:56:05.111014 sshd[1606]: pam_unix(sshd:session): session closed for user core
Apr 17 23:56:05.128216 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:54918.service: Deactivated successfully.
Apr 17 23:56:05.129543 systemd[1]: session-6.scope: Deactivated successfully.
Apr 17 23:56:05.130742 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit.
Apr 17 23:56:05.138100 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:54928.service - OpenSSH per-connection server daemon (10.0.0.1:54928).
Apr 17 23:56:05.138901 systemd-logind[1449]: Removed session 6.
Apr 17 23:56:05.172215 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 54928 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:56:05.173724 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:56:05.177436 systemd-logind[1449]: New session 7 of user core.
Apr 17 23:56:05.184992 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 17 23:56:05.236491 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 17 23:56:05.236798 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:56:05.483953 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 17 23:56:05.484002 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 17 23:56:05.723746 dockerd[1660]: time="2026-04-17T23:56:05.723615976Z" level=info msg="Starting up"
Apr 17 23:56:05.865039 dockerd[1660]: time="2026-04-17T23:56:05.864901552Z" level=info msg="Loading containers: start."
Apr 17 23:56:05.958702 kernel: Initializing XFRM netlink socket
Apr 17 23:56:06.020448 systemd-networkd[1391]: docker0: Link UP
Apr 17 23:56:06.038213 dockerd[1660]: time="2026-04-17T23:56:06.038166599Z" level=info msg="Loading containers: done."
Apr 17 23:56:06.052457 dockerd[1660]: time="2026-04-17T23:56:06.052368610Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 17 23:56:06.052696 dockerd[1660]: time="2026-04-17T23:56:06.052487229Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 17 23:56:06.052696 dockerd[1660]: time="2026-04-17T23:56:06.052639160Z" level=info msg="Daemon has completed initialization"
Apr 17 23:56:06.089436 dockerd[1660]: time="2026-04-17T23:56:06.089363152Z" level=info msg="API listen on /run/docker.sock"
Apr 17 23:56:06.089645 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 17 23:56:06.456683 containerd[1468]: time="2026-04-17T23:56:06.456626015Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 17 23:56:06.905215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount566230637.mount: Deactivated successfully.
Apr 17 23:56:07.538118 containerd[1468]: time="2026-04-17T23:56:07.538031869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:07.539031 containerd[1468]: time="2026-04-17T23:56:07.538977919Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952"
Apr 17 23:56:07.540245 containerd[1468]: time="2026-04-17T23:56:07.540204158Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:07.543280 containerd[1468]: time="2026-04-17T23:56:07.543243688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:07.544440 containerd[1468]: time="2026-04-17T23:56:07.544407127Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.087703812s"
Apr 17 23:56:07.544505 containerd[1468]: time="2026-04-17T23:56:07.544482400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 17 23:56:07.545173 containerd[1468]: time="2026-04-17T23:56:07.545156117Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 17 23:56:08.260157 containerd[1468]: time="2026-04-17T23:56:08.260107003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:08.261132 containerd[1468]: time="2026-04-17T23:56:08.260586060Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670"
Apr 17 23:56:08.261911 containerd[1468]: time="2026-04-17T23:56:08.261865898Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:08.264310 containerd[1468]: time="2026-04-17T23:56:08.264251135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:08.265229 containerd[1468]: time="2026-04-17T23:56:08.265199762Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 720.018321ms"
Apr 17 23:56:08.265265 containerd[1468]: time="2026-04-17T23:56:08.265230036Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 17 23:56:08.265755 containerd[1468]: time="2026-04-17T23:56:08.265724863Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 17 23:56:08.820289 containerd[1468]: time="2026-04-17T23:56:08.820228411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:08.820934 containerd[1468]: time="2026-04-17T23:56:08.820903737Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823"
Apr 17 23:56:08.821917 containerd[1468]: time="2026-04-17T23:56:08.821874243Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:08.824359 containerd[1468]: time="2026-04-17T23:56:08.824308625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:08.825190 containerd[1468]: time="2026-04-17T23:56:08.825150220Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 559.394216ms"
Apr 17 23:56:08.825190 containerd[1468]: time="2026-04-17T23:56:08.825183940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 17 23:56:08.825824 containerd[1468]: time="2026-04-17T23:56:08.825625475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 17 23:56:09.590428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573780404.mount: Deactivated successfully.
Apr 17 23:56:09.776411 containerd[1468]: time="2026-04-17T23:56:09.776341402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:09.776963 containerd[1468]: time="2026-04-17T23:56:09.776918872Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848"
Apr 17 23:56:09.778142 containerd[1468]: time="2026-04-17T23:56:09.778103165Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:09.779821 containerd[1468]: time="2026-04-17T23:56:09.779779121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:09.780188 containerd[1468]: time="2026-04-17T23:56:09.780167326Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 954.520942ms"
Apr 17 23:56:09.780216 containerd[1468]: time="2026-04-17T23:56:09.780193342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 17 23:56:09.780700 containerd[1468]: time="2026-04-17T23:56:09.780681259Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 17 23:56:10.109376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:56:10.114852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:56:10.115709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1769995487.mount: Deactivated successfully.
Apr 17 23:56:10.212934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:56:10.216182 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:56:10.254562 kubelet[1894]: E0417 23:56:10.254510 1894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:56:10.257820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:56:10.257947 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:56:10.803319 containerd[1468]: time="2026-04-17T23:56:10.803234022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:10.804269 containerd[1468]: time="2026-04-17T23:56:10.804235407Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483"
Apr 17 23:56:10.805166 containerd[1468]: time="2026-04-17T23:56:10.805134028Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:10.807724 containerd[1468]: time="2026-04-17T23:56:10.807689295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:10.808440 containerd[1468]: time="2026-04-17T23:56:10.808412466Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.0277059s"
Apr 17 23:56:10.808440 containerd[1468]: time="2026-04-17T23:56:10.808438441Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 17 23:56:10.808970 containerd[1468]: time="2026-04-17T23:56:10.808950559Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 17 23:56:11.133278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469590425.mount: Deactivated successfully.
Apr 17 23:56:11.138929 containerd[1468]: time="2026-04-17T23:56:11.138864760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:11.139535 containerd[1468]: time="2026-04-17T23:56:11.139481669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 17 23:56:11.141156 containerd[1468]: time="2026-04-17T23:56:11.141100711Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:11.143236 containerd[1468]: time="2026-04-17T23:56:11.143192458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:11.143807 containerd[1468]: time="2026-04-17T23:56:11.143762139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 334.784623ms"
Apr 17 23:56:11.143807 containerd[1468]: time="2026-04-17T23:56:11.143796460Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 17 23:56:11.144311 containerd[1468]: time="2026-04-17T23:56:11.144296278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 17 23:56:11.476221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520045991.mount: Deactivated successfully.
Apr 17 23:56:12.030462 containerd[1468]: time="2026-04-17T23:56:12.030408001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:12.031381 containerd[1468]: time="2026-04-17T23:56:12.031338561Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255"
Apr 17 23:56:12.032534 containerd[1468]: time="2026-04-17T23:56:12.032494018Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:12.035048 containerd[1468]: time="2026-04-17T23:56:12.035007081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:56:12.036409 containerd[1468]: time="2026-04-17T23:56:12.036329909Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 892.012766ms"
Apr 17 23:56:12.036409 containerd[1468]: time="2026-04-17T23:56:12.036381370Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 17 23:56:14.914216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:56:14.924051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:56:14.945940 systemd[1]: Reloading requested from client PID 2047 ('systemctl') (unit session-7.scope)...
Apr 17 23:56:14.945965 systemd[1]: Reloading...
Apr 17 23:56:15.007746 zram_generator::config[2089]: No configuration found.
Apr 17 23:56:15.086717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:56:15.133568 systemd[1]: Reloading finished in 187 ms.
Apr 17 23:56:15.167266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:56:15.170366 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:56:15.171366 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:56:15.171574 systemd[1]: kubelet.service: Deactivated successfully.
Apr 17 23:56:15.174933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:56:15.177798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:56:15.272365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:56:15.276023 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:56:15.307520 kubelet[2137]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:56:15.307520 kubelet[2137]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:56:15.307872 kubelet[2137]: I0417 23:56:15.307544 2137 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:56:16.018421 kubelet[2137]: I0417 23:56:16.018349 2137 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 23:56:16.018421 kubelet[2137]: I0417 23:56:16.018389 2137 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:56:16.018421 kubelet[2137]: I0417 23:56:16.018417 2137 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:56:16.018421 kubelet[2137]: I0417 23:56:16.018425 2137 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:56:16.018815 kubelet[2137]: I0417 23:56:16.018772 2137 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:56:16.086688 kubelet[2137]: E0417 23:56:16.086601 2137 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:56:16.087360 kubelet[2137]: I0417 23:56:16.087344 2137 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:56:16.091271 kubelet[2137]: E0417 23:56:16.091161 2137 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:56:16.091400 kubelet[2137]: I0417 23:56:16.091353 2137 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:56:16.095059 kubelet[2137]: I0417 23:56:16.095024 2137 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:56:16.096054 kubelet[2137]: I0417 23:56:16.096013 2137 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:56:16.096188 kubelet[2137]: I0417 23:56:16.096049 2137 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:56:16.096188 kubelet[2137]: I0417 23:56:16.096184 2137 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:56:16.096298 
kubelet[2137]: I0417 23:56:16.096191 2137 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 23:56:16.096298 kubelet[2137]: I0417 23:56:16.096271 2137 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:56:16.098325 kubelet[2137]: I0417 23:56:16.098291 2137 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:56:16.098471 kubelet[2137]: I0417 23:56:16.098442 2137 kubelet.go:475] "Attempting to sync node with API server" Apr 17 23:56:16.098471 kubelet[2137]: I0417 23:56:16.098464 2137 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:56:16.098504 kubelet[2137]: I0417 23:56:16.098480 2137 kubelet.go:387] "Adding apiserver pod source" Apr 17 23:56:16.098504 kubelet[2137]: I0417 23:56:16.098490 2137 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:56:16.099091 kubelet[2137]: E0417 23:56:16.099043 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:56:16.099492 kubelet[2137]: E0417 23:56:16.099448 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:56:16.100228 kubelet[2137]: I0417 23:56:16.100185 2137 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:56:16.101394 kubelet[2137]: I0417 23:56:16.101211 2137 kubelet.go:940] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:56:16.101394 kubelet[2137]: I0417 23:56:16.101236 2137 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:56:16.101394 kubelet[2137]: W0417 23:56:16.101307 2137 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:56:16.105520 kubelet[2137]: I0417 23:56:16.105483 2137 server.go:1262] "Started kubelet" Apr 17 23:56:16.105681 kubelet[2137]: I0417 23:56:16.105556 2137 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:56:16.107172 kubelet[2137]: I0417 23:56:16.106698 2137 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:56:16.107172 kubelet[2137]: I0417 23:56:16.106732 2137 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:56:16.107172 kubelet[2137]: I0417 23:56:16.106899 2137 server.go:310] "Adding debug handlers to kubelet server" Apr 17 23:56:16.107888 kubelet[2137]: I0417 23:56:16.107861 2137 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:56:16.107998 kubelet[2137]: I0417 23:56:16.107866 2137 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:56:16.108089 kubelet[2137]: I0417 23:56:16.108064 2137 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:56:16.109989 kubelet[2137]: E0417 23:56:16.108169 2137 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a74a3688d70ff9 default 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 23:56:16.105435129 +0000 UTC m=+0.826440409,LastTimestamp:2026-04-17 23:56:16.105435129 +0000 UTC m=+0.826440409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 23:56:16.109989 kubelet[2137]: I0417 23:56:16.109466 2137 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 23:56:16.109989 kubelet[2137]: I0417 23:56:16.109533 2137 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:56:16.109989 kubelet[2137]: I0417 23:56:16.109629 2137 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:56:16.109989 kubelet[2137]: E0417 23:56:16.109728 2137 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:56:16.109989 kubelet[2137]: E0417 23:56:16.109801 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" Apr 17 23:56:16.110258 kubelet[2137]: E0417 23:56:16.110192 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:56:16.110500 kubelet[2137]: I0417 23:56:16.110450 2137 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:56:16.110638 
kubelet[2137]: I0417 23:56:16.110603 2137 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:56:16.111909 kubelet[2137]: I0417 23:56:16.111875 2137 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:56:16.111946 kubelet[2137]: E0417 23:56:16.111912 2137 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:56:16.121643 kubelet[2137]: I0417 23:56:16.121570 2137 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:56:16.121643 kubelet[2137]: I0417 23:56:16.121590 2137 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:56:16.121643 kubelet[2137]: I0417 23:56:16.121619 2137 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:56:16.123745 kubelet[2137]: I0417 23:56:16.123702 2137 policy_none.go:49] "None policy: Start" Apr 17 23:56:16.123745 kubelet[2137]: I0417 23:56:16.123729 2137 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:56:16.123745 kubelet[2137]: I0417 23:56:16.123740 2137 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:56:16.125338 kubelet[2137]: I0417 23:56:16.125314 2137 policy_none.go:47] "Start" Apr 17 23:56:16.126193 kubelet[2137]: I0417 23:56:16.126085 2137 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:56:16.127542 kubelet[2137]: I0417 23:56:16.127527 2137 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:56:16.128594 kubelet[2137]: I0417 23:56:16.127644 2137 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 23:56:16.128594 kubelet[2137]: I0417 23:56:16.127813 2137 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 23:56:16.128594 kubelet[2137]: E0417 23:56:16.127876 2137 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:56:16.128594 kubelet[2137]: E0417 23:56:16.128355 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:56:16.131257 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:56:16.147362 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 23:56:16.149583 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 17 23:56:16.164863 kubelet[2137]: E0417 23:56:16.164790 2137 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:56:16.165130 kubelet[2137]: I0417 23:56:16.165100 2137 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:56:16.165202 kubelet[2137]: I0417 23:56:16.165117 2137 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:56:16.165478 kubelet[2137]: I0417 23:56:16.165466 2137 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:56:16.166629 kubelet[2137]: E0417 23:56:16.166587 2137 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:56:16.166712 kubelet[2137]: E0417 23:56:16.166703 2137 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 23:56:16.240751 systemd[1]: Created slice kubepods-burstable-pod6ba5e9e349885470a8bee2b2bdc55cfd.slice - libcontainer container kubepods-burstable-pod6ba5e9e349885470a8bee2b2bdc55cfd.slice. Apr 17 23:56:16.248381 kubelet[2137]: E0417 23:56:16.248328 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:56:16.250312 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. 
Apr 17 23:56:16.267539 kubelet[2137]: I0417 23:56:16.267499 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:56:16.267539 kubelet[2137]: E0417 23:56:16.267543 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:56:16.267931 kubelet[2137]: E0417 23:56:16.267886 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Apr 17 23:56:16.269538 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. Apr 17 23:56:16.272368 kubelet[2137]: E0417 23:56:16.272345 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:56:16.310312 kubelet[2137]: E0417 23:56:16.310216 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" Apr 17 23:56:16.310312 kubelet[2137]: I0417 23:56:16.310322 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ba5e9e349885470a8bee2b2bdc55cfd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ba5e9e349885470a8bee2b2bdc55cfd\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:56:16.310781 kubelet[2137]: I0417 23:56:16.310342 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:56:16.310781 kubelet[2137]: I0417 23:56:16.310374 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:56:16.310781 kubelet[2137]: I0417 23:56:16.310388 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:56:16.310781 kubelet[2137]: I0417 23:56:16.310401 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ba5e9e349885470a8bee2b2bdc55cfd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ba5e9e349885470a8bee2b2bdc55cfd\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:56:16.310781 kubelet[2137]: I0417 23:56:16.310418 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ba5e9e349885470a8bee2b2bdc55cfd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ba5e9e349885470a8bee2b2bdc55cfd\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:56:16.310875 kubelet[2137]: I0417 23:56:16.310432 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:56:16.310875 kubelet[2137]: I0417 23:56:16.310447 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:56:16.310875 kubelet[2137]: I0417 23:56:16.310469 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:56:16.470260 kubelet[2137]: I0417 23:56:16.470016 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:56:16.470449 kubelet[2137]: E0417 23:56:16.470374 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Apr 17 23:56:16.553207 kubelet[2137]: E0417 23:56:16.553052 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:16.554376 containerd[1468]: time="2026-04-17T23:56:16.554322211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ba5e9e349885470a8bee2b2bdc55cfd,Namespace:kube-system,Attempt:0,}" Apr 17 23:56:16.570853 kubelet[2137]: E0417 23:56:16.570787 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:16.571279 containerd[1468]: time="2026-04-17T23:56:16.571219503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 17 23:56:16.575780 kubelet[2137]: E0417 23:56:16.575746 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:16.576088 containerd[1468]: time="2026-04-17T23:56:16.576060669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 17 23:56:16.711263 kubelet[2137]: E0417 23:56:16.711166 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" Apr 17 23:56:16.872199 kubelet[2137]: I0417 23:56:16.872075 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:56:16.872431 kubelet[2137]: E0417 23:56:16.872376 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Apr 17 23:56:16.942015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2410279266.mount: Deactivated successfully. 
Apr 17 23:56:16.947639 containerd[1468]: time="2026-04-17T23:56:16.947580686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:56:16.950503 containerd[1468]: time="2026-04-17T23:56:16.950447070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 17 23:56:16.951533 containerd[1468]: time="2026-04-17T23:56:16.951468147Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:56:16.952330 containerd[1468]: time="2026-04-17T23:56:16.952293376Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:56:16.952930 containerd[1468]: time="2026-04-17T23:56:16.952904253Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:56:16.953887 containerd[1468]: time="2026-04-17T23:56:16.953853771Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:56:16.954780 containerd[1468]: time="2026-04-17T23:56:16.954759890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:56:16.956923 containerd[1468]: time="2026-04-17T23:56:16.956892268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:56:16.958002 
containerd[1468]: time="2026-04-17T23:56:16.957950483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 403.545013ms" Apr 17 23:56:16.959335 containerd[1468]: time="2026-04-17T23:56:16.959281221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 387.984374ms" Apr 17 23:56:16.965250 containerd[1468]: time="2026-04-17T23:56:16.965219875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 389.100268ms" Apr 17 23:56:16.971861 kubelet[2137]: E0417 23:56:16.971827 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:56:17.050859 kubelet[2137]: E0417 23:56:17.050815 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Apr 17 23:56:17.062227 containerd[1468]: time="2026-04-17T23:56:17.062111244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:56:17.062227 containerd[1468]: time="2026-04-17T23:56:17.062232114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:56:17.062416 containerd[1468]: time="2026-04-17T23:56:17.062258005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:17.062416 containerd[1468]: time="2026-04-17T23:56:17.062339635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:17.064508 containerd[1468]: time="2026-04-17T23:56:17.063738295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:56:17.064508 containerd[1468]: time="2026-04-17T23:56:17.063771115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:56:17.064508 containerd[1468]: time="2026-04-17T23:56:17.063782499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:17.064508 containerd[1468]: time="2026-04-17T23:56:17.063825494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:17.069231 containerd[1468]: time="2026-04-17T23:56:17.069018017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:56:17.070531 containerd[1468]: time="2026-04-17T23:56:17.070480240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:56:17.070579 containerd[1468]: time="2026-04-17T23:56:17.070553071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:17.070774 containerd[1468]: time="2026-04-17T23:56:17.070704614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:17.081845 systemd[1]: Started cri-containerd-f05fc7046b4f3b6f457101bbe56df639bfa99b90c37e04dec54e9a166acb9042.scope - libcontainer container f05fc7046b4f3b6f457101bbe56df639bfa99b90c37e04dec54e9a166acb9042. Apr 17 23:56:17.085440 systemd[1]: Started cri-containerd-12549bf5106d7a3c9f24226afd1df6b3ae57ad27a66fcaf1053c0791e9043be7.scope - libcontainer container 12549bf5106d7a3c9f24226afd1df6b3ae57ad27a66fcaf1053c0791e9043be7. Apr 17 23:56:17.086741 systemd[1]: Started cri-containerd-b7a759ff9968569db3fa46f957e832f459ea9da6b792879cfc7b9cdf3b132e97.scope - libcontainer container b7a759ff9968569db3fa46f957e832f459ea9da6b792879cfc7b9cdf3b132e97. 
Apr 17 23:56:17.119210 containerd[1468]: time="2026-04-17T23:56:17.119115547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ba5e9e349885470a8bee2b2bdc55cfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f05fc7046b4f3b6f457101bbe56df639bfa99b90c37e04dec54e9a166acb9042\""
Apr 17 23:56:17.120936 kubelet[2137]: E0417 23:56:17.120802 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:17.126048 containerd[1468]: time="2026-04-17T23:56:17.125777251Z" level=info msg="CreateContainer within sandbox \"f05fc7046b4f3b6f457101bbe56df639bfa99b90c37e04dec54e9a166acb9042\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 17 23:56:17.134230 containerd[1468]: time="2026-04-17T23:56:17.134200473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"12549bf5106d7a3c9f24226afd1df6b3ae57ad27a66fcaf1053c0791e9043be7\""
Apr 17 23:56:17.135298 containerd[1468]: time="2026-04-17T23:56:17.135127943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7a759ff9968569db3fa46f957e832f459ea9da6b792879cfc7b9cdf3b132e97\""
Apr 17 23:56:17.135477 kubelet[2137]: E0417 23:56:17.135192 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:17.136113 kubelet[2137]: E0417 23:56:17.136037 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:17.138809 containerd[1468]: time="2026-04-17T23:56:17.138686349Z" level=info msg="CreateContainer within sandbox \"12549bf5106d7a3c9f24226afd1df6b3ae57ad27a66fcaf1053c0791e9043be7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 17 23:56:17.140531 containerd[1468]: time="2026-04-17T23:56:17.140457386Z" level=info msg="CreateContainer within sandbox \"b7a759ff9968569db3fa46f957e832f459ea9da6b792879cfc7b9cdf3b132e97\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 17 23:56:17.144976 containerd[1468]: time="2026-04-17T23:56:17.144895078Z" level=info msg="CreateContainer within sandbox \"f05fc7046b4f3b6f457101bbe56df639bfa99b90c37e04dec54e9a166acb9042\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a6b1f1ee560d7612fd2e6f9309891b1ac4c1e64379a5389b791cf5106362fde\""
Apr 17 23:56:17.145468 containerd[1468]: time="2026-04-17T23:56:17.145423744Z" level=info msg="StartContainer for \"0a6b1f1ee560d7612fd2e6f9309891b1ac4c1e64379a5389b791cf5106362fde\""
Apr 17 23:56:17.155882 containerd[1468]: time="2026-04-17T23:56:17.155788222Z" level=info msg="CreateContainer within sandbox \"12549bf5106d7a3c9f24226afd1df6b3ae57ad27a66fcaf1053c0791e9043be7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e5782d7b61711fe4a5d47fbd78f6faa9f2509d386ff7d746cf437724c78c725\""
Apr 17 23:56:17.156340 containerd[1468]: time="2026-04-17T23:56:17.156323890Z" level=info msg="StartContainer for \"1e5782d7b61711fe4a5d47fbd78f6faa9f2509d386ff7d746cf437724c78c725\""
Apr 17 23:56:17.158955 containerd[1468]: time="2026-04-17T23:56:17.158894244Z" level=info msg="CreateContainer within sandbox \"b7a759ff9968569db3fa46f957e832f459ea9da6b792879cfc7b9cdf3b132e97\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"19999b4d18af12143c72e559047dbbc8050ed6ea548c444f3036b59ba77c080d\""
Apr 17 23:56:17.159470 containerd[1468]: time="2026-04-17T23:56:17.159455451Z" level=info msg="StartContainer for \"19999b4d18af12143c72e559047dbbc8050ed6ea548c444f3036b59ba77c080d\""
Apr 17 23:56:17.165825 kubelet[2137]: E0417 23:56:17.165776 2137 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:56:17.176085 systemd[1]: Started cri-containerd-0a6b1f1ee560d7612fd2e6f9309891b1ac4c1e64379a5389b791cf5106362fde.scope - libcontainer container 0a6b1f1ee560d7612fd2e6f9309891b1ac4c1e64379a5389b791cf5106362fde.
Apr 17 23:56:17.179115 systemd[1]: Started cri-containerd-1e5782d7b61711fe4a5d47fbd78f6faa9f2509d386ff7d746cf437724c78c725.scope - libcontainer container 1e5782d7b61711fe4a5d47fbd78f6faa9f2509d386ff7d746cf437724c78c725.
Apr 17 23:56:17.183280 systemd[1]: Started cri-containerd-19999b4d18af12143c72e559047dbbc8050ed6ea548c444f3036b59ba77c080d.scope - libcontainer container 19999b4d18af12143c72e559047dbbc8050ed6ea548c444f3036b59ba77c080d.
Apr 17 23:56:17.215798 containerd[1468]: time="2026-04-17T23:56:17.215733617Z" level=info msg="StartContainer for \"0a6b1f1ee560d7612fd2e6f9309891b1ac4c1e64379a5389b791cf5106362fde\" returns successfully"
Apr 17 23:56:17.225695 containerd[1468]: time="2026-04-17T23:56:17.225238810Z" level=info msg="StartContainer for \"1e5782d7b61711fe4a5d47fbd78f6faa9f2509d386ff7d746cf437724c78c725\" returns successfully"
Apr 17 23:56:17.225695 containerd[1468]: time="2026-04-17T23:56:17.225309799Z" level=info msg="StartContainer for \"19999b4d18af12143c72e559047dbbc8050ed6ea548c444f3036b59ba77c080d\" returns successfully"
Apr 17 23:56:17.675503 kubelet[2137]: I0417 23:56:17.675433 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 23:56:17.995686 kubelet[2137]: E0417 23:56:17.993334 2137 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 17 23:56:18.140736 kubelet[2137]: E0417 23:56:18.140640 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 23:56:18.140869 kubelet[2137]: E0417 23:56:18.140764 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:18.141513 kubelet[2137]: E0417 23:56:18.141499 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 23:56:18.141714 kubelet[2137]: E0417 23:56:18.141693 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:18.142386 kubelet[2137]: E0417 23:56:18.142369 2137 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 23:56:18.142473 kubelet[2137]: E0417 23:56:18.142460 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:18.202572 kubelet[2137]: I0417 23:56:18.202509 2137 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 17 23:56:18.202572 kubelet[2137]: E0417 23:56:18.202566 2137 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 17 23:56:18.211290 kubelet[2137]: E0417 23:56:18.211249 2137 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 23:56:18.311741 kubelet[2137]: E0417 23:56:18.311485 2137 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 23:56:18.412459 kubelet[2137]: E0417 23:56:18.412372 2137 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 23:56:18.513602 kubelet[2137]: E0417 23:56:18.513524 2137 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 23:56:18.613912 kubelet[2137]: E0417 23:56:18.613713 2137 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 23:56:18.714464 kubelet[2137]: E0417 23:56:18.714403 2137 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 23:56:18.815390 kubelet[2137]: E0417 23:56:18.815283 2137 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 23:56:18.911031 kubelet[2137]: I0417 23:56:18.910870 2137 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 17 23:56:18.917873 kubelet[2137]: E0417 23:56:18.917820 2137 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 17 23:56:18.917873 kubelet[2137]: I0417 23:56:18.917849 2137 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 17 23:56:18.919512 kubelet[2137]: E0417 23:56:18.919453 2137 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 17 23:56:18.919512 kubelet[2137]: I0417 23:56:18.919484 2137 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:56:18.920836 kubelet[2137]: E0417 23:56:18.920786 2137 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:56:19.101117 kubelet[2137]: I0417 23:56:19.101038 2137 apiserver.go:52] "Watching apiserver"
Apr 17 23:56:19.110062 kubelet[2137]: I0417 23:56:19.109984 2137 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 17 23:56:19.143293 kubelet[2137]: I0417 23:56:19.143221 2137 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 17 23:56:19.143433 kubelet[2137]: I0417 23:56:19.143397 2137 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 17 23:56:19.147673 kubelet[2137]: E0417 23:56:19.147592 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:19.149368 kubelet[2137]: E0417 23:56:19.149062 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:20.012908 kubelet[2137]: I0417 23:56:20.012849 2137 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:56:20.019996 kubelet[2137]: E0417 23:56:20.019936 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:20.145562 kubelet[2137]: E0417 23:56:20.145538 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:20.147177 kubelet[2137]: E0417 23:56:20.146471 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:20.147177 kubelet[2137]: E0417 23:56:20.146602 2137 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:20.215613 systemd[1]: Reloading requested from client PID 2424 ('systemctl') (unit session-7.scope)...
Apr 17 23:56:20.215711 systemd[1]: Reloading...
Apr 17 23:56:20.275710 zram_generator::config[2466]: No configuration found.
Apr 17 23:56:20.354383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:56:20.410512 systemd[1]: Reloading finished in 194 ms.
Apr 17 23:56:20.444248 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:56:20.464420 systemd[1]: kubelet.service: Deactivated successfully.
Apr 17 23:56:20.464817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:56:20.464888 systemd[1]: kubelet.service: Consumed 1.132s CPU time, 125.4M memory peak, 0B memory swap peak.
Apr 17 23:56:20.474268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:56:20.584790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:56:20.588166 (kubelet)[2508]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:56:20.633948 kubelet[2508]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 23:56:20.633948 kubelet[2508]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:56:20.633948 kubelet[2508]: I0417 23:56:20.633816 2508 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 23:56:20.642237 kubelet[2508]: I0417 23:56:20.642129 2508 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 17 23:56:20.642398 kubelet[2508]: I0417 23:56:20.642264 2508 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:56:20.642398 kubelet[2508]: I0417 23:56:20.642316 2508 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 23:56:20.642398 kubelet[2508]: I0417 23:56:20.642330 2508 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:56:20.642600 kubelet[2508]: I0417 23:56:20.642554 2508 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:56:20.644071 kubelet[2508]: I0417 23:56:20.644012 2508 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 17 23:56:20.646498 kubelet[2508]: I0417 23:56:20.646417 2508 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:56:20.649393 kubelet[2508]: E0417 23:56:20.649311 2508 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:56:20.649393 kubelet[2508]: I0417 23:56:20.649351 2508 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:56:20.655280 kubelet[2508]: I0417 23:56:20.655192 2508 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 23:56:20.655579 kubelet[2508]: I0417 23:56:20.655527 2508 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:56:20.655800 kubelet[2508]: I0417 23:56:20.655556 2508 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:56:20.656025 kubelet[2508]: I0417 23:56:20.655869 2508 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 23:56:20.656025 kubelet[2508]: I0417 23:56:20.655892 2508 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 23:56:20.656025 kubelet[2508]: I0417 23:56:20.655973 2508 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:56:20.656254 kubelet[2508]: I0417 23:56:20.656216 2508 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:56:20.656494 kubelet[2508]: I0417 23:56:20.656458 2508 kubelet.go:475] "Attempting to sync node with API server"
Apr 17 23:56:20.657545 kubelet[2508]: I0417 23:56:20.656782 2508 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:56:20.657545 kubelet[2508]: I0417 23:56:20.656809 2508 kubelet.go:387] "Adding apiserver pod source"
Apr 17 23:56:20.657545 kubelet[2508]: I0417 23:56:20.656821 2508 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:56:20.661033 kubelet[2508]: I0417 23:56:20.660984 2508 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:56:20.662123 kubelet[2508]: I0417 23:56:20.662055 2508 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:56:20.662123 kubelet[2508]: I0417 23:56:20.662117 2508 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 23:56:20.667337 kubelet[2508]: I0417 23:56:20.664747 2508 server.go:1262] "Started kubelet"
Apr 17 23:56:20.667337 kubelet[2508]: I0417 23:56:20.666314 2508 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 23:56:20.668034 kubelet[2508]: I0417 23:56:20.667968 2508 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 17 23:56:20.668117 kubelet[2508]: I0417 23:56:20.668076 2508 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:56:20.668117 kubelet[2508]: I0417 23:56:20.668103 2508 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 23:56:20.668255 kubelet[2508]: I0417 23:56:20.668219 2508 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 23:56:20.671729 kubelet[2508]: E0417 23:56:20.668616 2508 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 23:56:20.671729 kubelet[2508]: I0417 23:56:20.668854 2508 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:56:20.671729 kubelet[2508]: I0417 23:56:20.668913 2508 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:56:20.671729 kubelet[2508]: I0417 23:56:20.670567 2508 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:56:20.673986 kubelet[2508]: I0417 23:56:20.673624 2508 server.go:310] "Adding debug handlers to kubelet server"
Apr 17 23:56:20.678060 kubelet[2508]: I0417 23:56:20.678037 2508 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:56:20.678173 kubelet[2508]: I0417 23:56:20.678165 2508 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 23:56:20.678308 kubelet[2508]: I0417 23:56:20.678301 2508 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:56:20.678517 kubelet[2508]: I0417 23:56:20.678505 2508 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:56:20.684572 kubelet[2508]: I0417 23:56:20.684520 2508 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:56:20.686250 kubelet[2508]: I0417 23:56:20.686214 2508 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:56:20.686250 kubelet[2508]: I0417 23:56:20.686241 2508 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 17 23:56:20.686332 kubelet[2508]: I0417 23:56:20.686258 2508 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 17 23:56:20.686332 kubelet[2508]: E0417 23:56:20.686292 2508 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:56:20.706722 kubelet[2508]: I0417 23:56:20.706610 2508 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:56:20.706901 kubelet[2508]: I0417 23:56:20.706892 2508 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:56:20.706940 kubelet[2508]: I0417 23:56:20.706936 2508 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:56:20.707058 kubelet[2508]: I0417 23:56:20.707051 2508 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 17 23:56:20.707106 kubelet[2508]: I0417 23:56:20.707093 2508 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 17 23:56:20.707131 kubelet[2508]: I0417 23:56:20.707128 2508 policy_none.go:49] "None policy: Start"
Apr 17 23:56:20.707166 kubelet[2508]: I0417 23:56:20.707162 2508 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 23:56:20.707201 kubelet[2508]: I0417 23:56:20.707195 2508 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 23:56:20.707289 kubelet[2508]: I0417 23:56:20.707284 2508 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 17 23:56:20.707317 kubelet[2508]: I0417 23:56:20.707314 2508 policy_none.go:47] "Start"
Apr 17 23:56:20.711319 kubelet[2508]: E0417 23:56:20.711280 2508 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:56:20.711446 kubelet[2508]: I0417 23:56:20.711427 2508 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 23:56:20.711522 kubelet[2508]: I0417 23:56:20.711446 2508 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:56:20.711598 kubelet[2508]: I0417 23:56:20.711580 2508 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 23:56:20.712511 kubelet[2508]: E0417 23:56:20.712465 2508 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:56:20.787242 kubelet[2508]: I0417 23:56:20.787204 2508 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 17 23:56:20.787417 kubelet[2508]: I0417 23:56:20.787392 2508 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:56:20.787510 kubelet[2508]: I0417 23:56:20.787290 2508 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 17 23:56:20.795074 kubelet[2508]: E0417 23:56:20.795032 2508 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 17 23:56:20.795309 kubelet[2508]: E0417 23:56:20.795160 2508 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 17 23:56:20.795309 kubelet[2508]: E0417 23:56:20.795219 2508 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:56:20.816111 kubelet[2508]: I0417 23:56:20.815996 2508 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 23:56:20.822367 kubelet[2508]: I0417 23:56:20.822296 2508 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 17 23:56:20.822367 kubelet[2508]: I0417 23:56:20.822377 2508 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 17 23:56:20.869001 kubelet[2508]: I0417 23:56:20.868914 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:56:20.869001 kubelet[2508]: I0417 23:56:20.868957 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:56:20.869001 kubelet[2508]: I0417 23:56:20.868976 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ba5e9e349885470a8bee2b2bdc55cfd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ba5e9e349885470a8bee2b2bdc55cfd\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 23:56:20.869001 kubelet[2508]: I0417 23:56:20.868989 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:56:20.869001 kubelet[2508]: I0417 23:56:20.869000 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost"
Apr 17 23:56:20.869367 kubelet[2508]: I0417 23:56:20.869011 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ba5e9e349885470a8bee2b2bdc55cfd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ba5e9e349885470a8bee2b2bdc55cfd\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 23:56:20.869367 kubelet[2508]: I0417 23:56:20.869021 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ba5e9e349885470a8bee2b2bdc55cfd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ba5e9e349885470a8bee2b2bdc55cfd\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 23:56:20.869367 kubelet[2508]: I0417 23:56:20.869032 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:56:20.869367 kubelet[2508]: I0417 23:56:20.869043 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:56:21.095935 kubelet[2508]: E0417 23:56:21.095626 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:21.095935 kubelet[2508]: E0417 23:56:21.095705 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:21.096620 kubelet[2508]: E0417 23:56:21.095707 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:21.220570 sudo[2552]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 17 23:56:21.221032 sudo[2552]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 17 23:56:21.657444 kubelet[2508]: I0417 23:56:21.657389 2508 apiserver.go:52] "Watching apiserver"
Apr 17 23:56:21.669282 kubelet[2508]: I0417 23:56:21.669235 2508 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 17 23:56:21.697261 kubelet[2508]: I0417 23:56:21.696850 2508 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 17 23:56:21.697261 kubelet[2508]: E0417 23:56:21.696956 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:21.697261 kubelet[2508]: I0417 23:56:21.697130 2508 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 17 23:56:21.707536 kubelet[2508]: E0417 23:56:21.707467 2508 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 17 23:56:21.707765 kubelet[2508]: E0417 23:56:21.707608 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:21.708127 kubelet[2508]: E0417 23:56:21.708056 2508 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 17 23:56:21.708207 kubelet[2508]: E0417 23:56:21.708160 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:21.715943 sudo[2552]: pam_unix(sudo:session): session closed for user root
Apr 17 23:56:21.721275 kubelet[2508]: I0417 23:56:21.721031 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.7210071559999998 podStartE2EDuration="2.721007156s" podCreationTimestamp="2026-04-17 23:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:56:21.719251468 +0000 UTC m=+1.127547833" watchObservedRunningTime="2026-04-17 23:56:21.721007156 +0000 UTC m=+1.129303518"
Apr 17 23:56:21.726443 kubelet[2508]: I0417 23:56:21.726372 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7263586659999999 podStartE2EDuration="1.726358666s" podCreationTimestamp="2026-04-17 23:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:56:21.726100102 +0000 UTC m=+1.134396448" watchObservedRunningTime="2026-04-17 23:56:21.726358666 +0000 UTC m=+1.134655017"
Apr 17 23:56:21.741459 kubelet[2508]: I0417 23:56:21.741164 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.741148789 podStartE2EDuration="2.741148789s" podCreationTimestamp="2026-04-17 23:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:56:21.733152525 +0000 UTC m=+1.141448869" watchObservedRunningTime="2026-04-17 23:56:21.741148789 +0000 UTC m=+1.149445138"
Apr 17 23:56:22.699187 kubelet[2508]: E0417 23:56:22.699096 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:22.699187 kubelet[2508]: E0417 23:56:22.699139 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:23.027347 sudo[1642]: pam_unix(sudo:session): session closed for user root
Apr 17 23:56:23.028818 sshd[1639]: pam_unix(sshd:session): session closed for user core
Apr 17 23:56:23.032806 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:54928.service: Deactivated successfully.
Apr 17 23:56:23.034280 systemd[1]: session-7.scope: Deactivated successfully.
Apr 17 23:56:23.034445 systemd[1]: session-7.scope: Consumed 5.002s CPU time, 158.8M memory peak, 0B memory swap peak.
Apr 17 23:56:23.035105 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit.
Apr 17 23:56:23.036056 systemd-logind[1449]: Removed session 7.
Apr 17 23:56:23.701162 kubelet[2508]: E0417 23:56:23.701112 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:27.434807 kubelet[2508]: I0417 23:56:27.434761 2508 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:56:27.435195 containerd[1468]: time="2026-04-17T23:56:27.435164605Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:56:27.435376 kubelet[2508]: I0417 23:56:27.435352 2508 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:56:27.470920 kubelet[2508]: E0417 23:56:27.470856 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:27.709612 kubelet[2508]: E0417 23:56:27.709391 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:28.102725 kubelet[2508]: E0417 23:56:28.102490 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:28.498729 systemd[1]: Created slice kubepods-besteffort-pod6a4365a1_677b_41ff_ad77_a105ec9cde43.slice - libcontainer container kubepods-besteffort-pod6a4365a1_677b_41ff_ad77_a105ec9cde43.slice. Apr 17 23:56:28.513513 systemd[1]: Created slice kubepods-burstable-pod925c32f5_58fb_421f_aeb4_ec0de0b9bd25.slice - libcontainer container kubepods-burstable-pod925c32f5_58fb_421f_aeb4_ec0de0b9bd25.slice. 
Apr 17 23:56:28.571275 kubelet[2508]: I0417 23:56:28.571186 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-etc-cni-netd\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.571275 kubelet[2508]: I0417 23:56:28.571256 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-clustermesh-secrets\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.571275 kubelet[2508]: I0417 23:56:28.571281 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a4365a1-677b-41ff-ad77-a105ec9cde43-xtables-lock\") pod \"kube-proxy-f7dph\" (UID: \"6a4365a1-677b-41ff-ad77-a105ec9cde43\") " pod="kube-system/kube-proxy-f7dph" Apr 17 23:56:28.571275 kubelet[2508]: I0417 23:56:28.571304 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-bpf-maps\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.571987 kubelet[2508]: I0417 23:56:28.571321 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-xtables-lock\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.571987 kubelet[2508]: I0417 23:56:28.571339 2508 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-host-proc-sys-kernel\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.571987 kubelet[2508]: I0417 23:56:28.571373 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c97pc\" (UniqueName: \"kubernetes.io/projected/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-kube-api-access-c97pc\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.571987 kubelet[2508]: I0417 23:56:28.571426 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a4365a1-677b-41ff-ad77-a105ec9cde43-kube-proxy\") pod \"kube-proxy-f7dph\" (UID: \"6a4365a1-677b-41ff-ad77-a105ec9cde43\") " pod="kube-system/kube-proxy-f7dph" Apr 17 23:56:28.571987 kubelet[2508]: I0417 23:56:28.571503 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-hostproc\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.571987 kubelet[2508]: I0417 23:56:28.571532 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-cgroup\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.572088 kubelet[2508]: I0417 23:56:28.571551 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-hubble-tls\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.572088 kubelet[2508]: I0417 23:56:28.571572 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlwjq\" (UniqueName: \"kubernetes.io/projected/6a4365a1-677b-41ff-ad77-a105ec9cde43-kube-api-access-nlwjq\") pod \"kube-proxy-f7dph\" (UID: \"6a4365a1-677b-41ff-ad77-a105ec9cde43\") " pod="kube-system/kube-proxy-f7dph" Apr 17 23:56:28.572088 kubelet[2508]: I0417 23:56:28.571591 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-run\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.572088 kubelet[2508]: I0417 23:56:28.571607 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-lib-modules\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.572088 kubelet[2508]: I0417 23:56:28.571624 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-config-path\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.572088 kubelet[2508]: I0417 23:56:28.571640 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-host-proc-sys-net\") pod \"cilium-6fh7d\" 
(UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.572179 kubelet[2508]: I0417 23:56:28.571737 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a4365a1-677b-41ff-ad77-a105ec9cde43-lib-modules\") pod \"kube-proxy-f7dph\" (UID: \"6a4365a1-677b-41ff-ad77-a105ec9cde43\") " pod="kube-system/kube-proxy-f7dph" Apr 17 23:56:28.572179 kubelet[2508]: I0417 23:56:28.571759 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cni-path\") pod \"cilium-6fh7d\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") " pod="kube-system/cilium-6fh7d" Apr 17 23:56:28.668285 systemd[1]: Created slice kubepods-besteffort-pod1f4818ae_1272_408c_9941_3d075c787340.slice - libcontainer container kubepods-besteffort-pod1f4818ae_1272_408c_9941_3d075c787340.slice. 
Apr 17 23:56:28.711393 kubelet[2508]: E0417 23:56:28.711224 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:28.777196 kubelet[2508]: I0417 23:56:28.777006 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f4818ae-1272-408c-9941-3d075c787340-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-qf4lt\" (UID: \"1f4818ae-1272-408c-9941-3d075c787340\") " pod="kube-system/cilium-operator-6f9c7c5859-qf4lt" Apr 17 23:56:28.777196 kubelet[2508]: I0417 23:56:28.777031 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skm6n\" (UniqueName: \"kubernetes.io/projected/1f4818ae-1272-408c-9941-3d075c787340-kube-api-access-skm6n\") pod \"cilium-operator-6f9c7c5859-qf4lt\" (UID: \"1f4818ae-1272-408c-9941-3d075c787340\") " pod="kube-system/cilium-operator-6f9c7c5859-qf4lt" Apr 17 23:56:28.810636 kubelet[2508]: E0417 23:56:28.810541 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:28.811431 containerd[1468]: time="2026-04-17T23:56:28.811382932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f7dph,Uid:6a4365a1-677b-41ff-ad77-a105ec9cde43,Namespace:kube-system,Attempt:0,}" Apr 17 23:56:28.819995 kubelet[2508]: E0417 23:56:28.819922 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:28.820577 containerd[1468]: time="2026-04-17T23:56:28.820518079Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-6fh7d,Uid:925c32f5-58fb-421f-aeb4-ec0de0b9bd25,Namespace:kube-system,Attempt:0,}" Apr 17 23:56:28.843077 containerd[1468]: time="2026-04-17T23:56:28.842836728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:56:28.843077 containerd[1468]: time="2026-04-17T23:56:28.842927617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:56:28.843077 containerd[1468]: time="2026-04-17T23:56:28.842935835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:28.843077 containerd[1468]: time="2026-04-17T23:56:28.842996538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:28.852418 containerd[1468]: time="2026-04-17T23:56:28.852138558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:56:28.852418 containerd[1468]: time="2026-04-17T23:56:28.852195998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:56:28.852418 containerd[1468]: time="2026-04-17T23:56:28.852214282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:28.852418 containerd[1468]: time="2026-04-17T23:56:28.852288900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:28.864177 systemd[1]: Started cri-containerd-5793ce52e3b6f86436986a71b625c120902c7049bf14ee42dd9d7027bfd22237.scope - libcontainer container 5793ce52e3b6f86436986a71b625c120902c7049bf14ee42dd9d7027bfd22237. 
Apr 17 23:56:28.870633 systemd[1]: Started cri-containerd-11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686.scope - libcontainer container 11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686. Apr 17 23:56:28.897706 containerd[1468]: time="2026-04-17T23:56:28.897380375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f7dph,Uid:6a4365a1-677b-41ff-ad77-a105ec9cde43,Namespace:kube-system,Attempt:0,} returns sandbox id \"5793ce52e3b6f86436986a71b625c120902c7049bf14ee42dd9d7027bfd22237\"" Apr 17 23:56:28.898928 kubelet[2508]: E0417 23:56:28.898880 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:28.908549 containerd[1468]: time="2026-04-17T23:56:28.908453589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6fh7d,Uid:925c32f5-58fb-421f-aeb4-ec0de0b9bd25,Namespace:kube-system,Attempt:0,} returns sandbox id \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\"" Apr 17 23:56:28.908976 containerd[1468]: time="2026-04-17T23:56:28.908933810Z" level=info msg="CreateContainer within sandbox \"5793ce52e3b6f86436986a71b625c120902c7049bf14ee42dd9d7027bfd22237\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:56:28.909626 kubelet[2508]: E0417 23:56:28.909606 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:28.911420 containerd[1468]: time="2026-04-17T23:56:28.911349401Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 17 23:56:28.931344 containerd[1468]: time="2026-04-17T23:56:28.931240813Z" level=info msg="CreateContainer within sandbox \"5793ce52e3b6f86436986a71b625c120902c7049bf14ee42dd9d7027bfd22237\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f346adb38b71fc962093cab663ded8782f36566c841f1b7eacb0c106b28a645a\"" Apr 17 23:56:28.932401 containerd[1468]: time="2026-04-17T23:56:28.932313068Z" level=info msg="StartContainer for \"f346adb38b71fc962093cab663ded8782f36566c841f1b7eacb0c106b28a645a\"" Apr 17 23:56:28.972069 systemd[1]: Started cri-containerd-f346adb38b71fc962093cab663ded8782f36566c841f1b7eacb0c106b28a645a.scope - libcontainer container f346adb38b71fc962093cab663ded8782f36566c841f1b7eacb0c106b28a645a. Apr 17 23:56:28.976167 kubelet[2508]: E0417 23:56:28.976107 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:28.979835 containerd[1468]: time="2026-04-17T23:56:28.979423959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-qf4lt,Uid:1f4818ae-1272-408c-9941-3d075c787340,Namespace:kube-system,Attempt:0,}" Apr 17 23:56:29.009633 containerd[1468]: time="2026-04-17T23:56:29.008646769Z" level=info msg="StartContainer for \"f346adb38b71fc962093cab663ded8782f36566c841f1b7eacb0c106b28a645a\" returns successfully" Apr 17 23:56:29.018198 containerd[1468]: time="2026-04-17T23:56:29.017222402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:56:29.018198 containerd[1468]: time="2026-04-17T23:56:29.017340599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:56:29.018198 containerd[1468]: time="2026-04-17T23:56:29.017390647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:29.018198 containerd[1468]: time="2026-04-17T23:56:29.017609557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:56:29.039922 systemd[1]: Started cri-containerd-9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125.scope - libcontainer container 9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125. Apr 17 23:56:29.081350 containerd[1468]: time="2026-04-17T23:56:29.081227128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-qf4lt,Uid:1f4818ae-1272-408c-9941-3d075c787340,Namespace:kube-system,Attempt:0,} returns sandbox id \"9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125\"" Apr 17 23:56:29.082960 kubelet[2508]: E0417 23:56:29.082314 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:29.716041 kubelet[2508]: E0417 23:56:29.716010 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:29.716041 kubelet[2508]: E0417 23:56:29.716007 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:29.725389 kubelet[2508]: I0417 23:56:29.725282 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f7dph" podStartSLOduration=1.725265394 podStartE2EDuration="1.725265394s" podCreationTimestamp="2026-04-17 23:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:56:29.725028233 +0000 UTC m=+9.133324576" 
watchObservedRunningTime="2026-04-17 23:56:29.725265394 +0000 UTC m=+9.133561748" Apr 17 23:56:32.693057 kubelet[2508]: E0417 23:56:32.693018 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:34.139532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801329620.mount: Deactivated successfully. Apr 17 23:56:35.408837 containerd[1468]: time="2026-04-17T23:56:35.408749827Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:56:35.409548 containerd[1468]: time="2026-04-17T23:56:35.409496882Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 17 23:56:35.410800 containerd[1468]: time="2026-04-17T23:56:35.410727005Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:56:35.412457 containerd[1468]: time="2026-04-17T23:56:35.412389292Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.500972033s" Apr 17 23:56:35.412457 containerd[1468]: time="2026-04-17T23:56:35.412452618Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 17 23:56:35.413829 containerd[1468]: time="2026-04-17T23:56:35.413803837Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 17 23:56:35.417123 containerd[1468]: time="2026-04-17T23:56:35.417050624Z" level=info msg="CreateContainer within sandbox \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 17 23:56:35.431480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1660174496.mount: Deactivated successfully. Apr 17 23:56:35.434739 containerd[1468]: time="2026-04-17T23:56:35.434567579Z" level=info msg="CreateContainer within sandbox \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff\"" Apr 17 23:56:35.435214 containerd[1468]: time="2026-04-17T23:56:35.435197152Z" level=info msg="StartContainer for \"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff\"" Apr 17 23:56:35.468016 systemd[1]: Started cri-containerd-fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff.scope - libcontainer container fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff. Apr 17 23:56:35.528593 containerd[1468]: time="2026-04-17T23:56:35.528527017Z" level=info msg="StartContainer for \"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff\" returns successfully" Apr 17 23:56:35.537192 systemd[1]: cri-containerd-fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff.scope: Deactivated successfully. 
Apr 17 23:56:35.584919 containerd[1468]: time="2026-04-17T23:56:35.582754389Z" level=info msg="shim disconnected" id=fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff namespace=k8s.io Apr 17 23:56:35.584919 containerd[1468]: time="2026-04-17T23:56:35.584893862Z" level=warning msg="cleaning up after shim disconnected" id=fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff namespace=k8s.io Apr 17 23:56:35.584919 containerd[1468]: time="2026-04-17T23:56:35.584907576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:56:35.731482 kubelet[2508]: E0417 23:56:35.731278 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:35.737145 containerd[1468]: time="2026-04-17T23:56:35.736995878Z" level=info msg="CreateContainer within sandbox \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 17 23:56:35.752148 containerd[1468]: time="2026-04-17T23:56:35.752107698Z" level=info msg="CreateContainer within sandbox \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d\"" Apr 17 23:56:35.753000 containerd[1468]: time="2026-04-17T23:56:35.752958772Z" level=info msg="StartContainer for \"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d\"" Apr 17 23:56:35.776860 systemd[1]: Started cri-containerd-23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d.scope - libcontainer container 23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d. 
Apr 17 23:56:35.798310 containerd[1468]: time="2026-04-17T23:56:35.798246612Z" level=info msg="StartContainer for \"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d\" returns successfully" Apr 17 23:56:35.807430 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:56:35.807598 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:56:35.807710 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:56:35.814337 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:56:35.814546 systemd[1]: cri-containerd-23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d.scope: Deactivated successfully. Apr 17 23:56:35.831151 containerd[1468]: time="2026-04-17T23:56:35.830966585Z" level=info msg="shim disconnected" id=23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d namespace=k8s.io Apr 17 23:56:35.831151 containerd[1468]: time="2026-04-17T23:56:35.831021971Z" level=warning msg="cleaning up after shim disconnected" id=23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d namespace=k8s.io Apr 17 23:56:35.831151 containerd[1468]: time="2026-04-17T23:56:35.831029249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:56:35.837268 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:56:36.429316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff-rootfs.mount: Deactivated successfully. 
Apr 17 23:56:36.737866 kubelet[2508]: E0417 23:56:36.737044 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:56:36.759779 containerd[1468]: time="2026-04-17T23:56:36.759578754Z" level=info msg="CreateContainer within sandbox \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 17 23:56:36.788984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount576004340.mount: Deactivated successfully. Apr 17 23:56:36.798236 containerd[1468]: time="2026-04-17T23:56:36.798175328Z" level=info msg="CreateContainer within sandbox \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9\"" Apr 17 23:56:36.799315 containerd[1468]: time="2026-04-17T23:56:36.799196219Z" level=info msg="StartContainer for \"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9\"" Apr 17 23:56:36.835998 systemd[1]: Started cri-containerd-be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9.scope - libcontainer container be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9. Apr 17 23:56:36.862703 containerd[1468]: time="2026-04-17T23:56:36.862599439Z" level=info msg="StartContainer for \"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9\" returns successfully" Apr 17 23:56:36.863473 systemd[1]: cri-containerd-be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9.scope: Deactivated successfully. 
Apr 17 23:56:36.896181 containerd[1468]: time="2026-04-17T23:56:36.896008532Z" level=info msg="shim disconnected" id=be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9 namespace=k8s.io Apr 17 23:56:36.896181 containerd[1468]: time="2026-04-17T23:56:36.896063682Z" level=warning msg="cleaning up after shim disconnected" id=be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9 namespace=k8s.io Apr 17 23:56:36.896181 containerd[1468]: time="2026-04-17T23:56:36.896070809Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:56:36.909331 containerd[1468]: time="2026-04-17T23:56:36.908364364Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:56:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 17 23:56:37.097974 containerd[1468]: time="2026-04-17T23:56:37.097835097Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:56:37.099011 containerd[1468]: time="2026-04-17T23:56:37.098950358Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 17 23:56:37.099970 containerd[1468]: time="2026-04-17T23:56:37.099920149Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:56:37.100871 containerd[1468]: time="2026-04-17T23:56:37.100834288Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", 
repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.687001094s" Apr 17 23:56:37.100904 containerd[1468]: time="2026-04-17T23:56:37.100869684Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 17 23:56:37.104913 containerd[1468]: time="2026-04-17T23:56:37.104870098Z" level=info msg="CreateContainer within sandbox \"9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 17 23:56:37.116914 containerd[1468]: time="2026-04-17T23:56:37.116833843Z" level=info msg="CreateContainer within sandbox \"9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\"" Apr 17 23:56:37.117430 containerd[1468]: time="2026-04-17T23:56:37.117375107Z" level=info msg="StartContainer for \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\"" Apr 17 23:56:37.148947 systemd[1]: Started cri-containerd-d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b.scope - libcontainer container d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b. Apr 17 23:56:37.186272 containerd[1468]: time="2026-04-17T23:56:37.186125704Z" level=info msg="StartContainer for \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\" returns successfully" Apr 17 23:56:37.430075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9-rootfs.mount: Deactivated successfully. 
Apr 17 23:56:37.741788 kubelet[2508]: E0417 23:56:37.740559 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:37.743571 kubelet[2508]: E0417 23:56:37.743522 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:37.749205 containerd[1468]: time="2026-04-17T23:56:37.749139011Z" level=info msg="CreateContainer within sandbox \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 17 23:56:37.751033 kubelet[2508]: I0417 23:56:37.750985 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-qf4lt" podStartSLOduration=1.732383614 podStartE2EDuration="9.750971419s" podCreationTimestamp="2026-04-17 23:56:28 +0000 UTC" firstStartedPulling="2026-04-17 23:56:29.082985623 +0000 UTC m=+8.491281977" lastFinishedPulling="2026-04-17 23:56:37.101573439 +0000 UTC m=+16.509869782" observedRunningTime="2026-04-17 23:56:37.749523883 +0000 UTC m=+17.157820227" watchObservedRunningTime="2026-04-17 23:56:37.750971419 +0000 UTC m=+17.159267773"
Apr 17 23:56:37.770471 containerd[1468]: time="2026-04-17T23:56:37.770401530Z" level=info msg="CreateContainer within sandbox \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f\""
Apr 17 23:56:37.771220 containerd[1468]: time="2026-04-17T23:56:37.771184480Z" level=info msg="StartContainer for \"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f\""
Apr 17 23:56:37.819948 systemd[1]: Started cri-containerd-7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f.scope - libcontainer container 7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f.
Apr 17 23:56:37.851869 systemd[1]: cri-containerd-7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f.scope: Deactivated successfully.
Apr 17 23:56:37.854880 containerd[1468]: time="2026-04-17T23:56:37.854831189Z" level=info msg="StartContainer for \"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f\" returns successfully"
Apr 17 23:56:37.876158 containerd[1468]: time="2026-04-17T23:56:37.876088495Z" level=info msg="shim disconnected" id=7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f namespace=k8s.io
Apr 17 23:56:37.876158 containerd[1468]: time="2026-04-17T23:56:37.876147879Z" level=warning msg="cleaning up after shim disconnected" id=7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f namespace=k8s.io
Apr 17 23:56:37.876158 containerd[1468]: time="2026-04-17T23:56:37.876155358Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:56:38.429452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f-rootfs.mount: Deactivated successfully.
Apr 17 23:56:38.749040 kubelet[2508]: E0417 23:56:38.748789 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:38.749040 kubelet[2508]: E0417 23:56:38.748844 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:38.754333 containerd[1468]: time="2026-04-17T23:56:38.754280949Z" level=info msg="CreateContainer within sandbox \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 23:56:38.773146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468666162.mount: Deactivated successfully.
Apr 17 23:56:38.774704 containerd[1468]: time="2026-04-17T23:56:38.774595321Z" level=info msg="CreateContainer within sandbox \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\""
Apr 17 23:56:38.775037 containerd[1468]: time="2026-04-17T23:56:38.774997578Z" level=info msg="StartContainer for \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\""
Apr 17 23:56:38.810810 systemd[1]: Started cri-containerd-a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673.scope - libcontainer container a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673.
Apr 17 23:56:38.838108 containerd[1468]: time="2026-04-17T23:56:38.838035257Z" level=info msg="StartContainer for \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\" returns successfully"
Apr 17 23:56:38.979103 kubelet[2508]: I0417 23:56:38.979062 2508 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 17 23:56:39.017105 systemd[1]: Created slice kubepods-burstable-podbe9f5165_a4d6_4681_a43e_5dff581646c1.slice - libcontainer container kubepods-burstable-podbe9f5165_a4d6_4681_a43e_5dff581646c1.slice.
Apr 17 23:56:39.023580 systemd[1]: Created slice kubepods-burstable-podadf95be6_49bf_4313_9331_dadbee756dcb.slice - libcontainer container kubepods-burstable-podadf95be6_49bf_4313_9331_dadbee756dcb.slice.
Apr 17 23:56:39.154112 kubelet[2508]: I0417 23:56:39.153984 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqsxd\" (UniqueName: \"kubernetes.io/projected/be9f5165-a4d6-4681-a43e-5dff581646c1-kube-api-access-nqsxd\") pod \"coredns-66bc5c9577-tq9pf\" (UID: \"be9f5165-a4d6-4681-a43e-5dff581646c1\") " pod="kube-system/coredns-66bc5c9577-tq9pf"
Apr 17 23:56:39.154112 kubelet[2508]: I0417 23:56:39.154058 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9f5165-a4d6-4681-a43e-5dff581646c1-config-volume\") pod \"coredns-66bc5c9577-tq9pf\" (UID: \"be9f5165-a4d6-4681-a43e-5dff581646c1\") " pod="kube-system/coredns-66bc5c9577-tq9pf"
Apr 17 23:56:39.154112 kubelet[2508]: I0417 23:56:39.154080 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfchv\" (UniqueName: \"kubernetes.io/projected/adf95be6-49bf-4313-9331-dadbee756dcb-kube-api-access-kfchv\") pod \"coredns-66bc5c9577-lv7hn\" (UID: \"adf95be6-49bf-4313-9331-dadbee756dcb\") " pod="kube-system/coredns-66bc5c9577-lv7hn"
Apr 17 23:56:39.154112 kubelet[2508]: I0417 23:56:39.154097 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf95be6-49bf-4313-9331-dadbee756dcb-config-volume\") pod \"coredns-66bc5c9577-lv7hn\" (UID: \"adf95be6-49bf-4313-9331-dadbee756dcb\") " pod="kube-system/coredns-66bc5c9577-lv7hn"
Apr 17 23:56:39.325948 kubelet[2508]: E0417 23:56:39.325276 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:39.328325 kubelet[2508]: E0417 23:56:39.328277 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:39.348617 containerd[1468]: time="2026-04-17T23:56:39.348513638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lv7hn,Uid:adf95be6-49bf-4313-9331-dadbee756dcb,Namespace:kube-system,Attempt:0,}"
Apr 17 23:56:39.361959 containerd[1468]: time="2026-04-17T23:56:39.361875880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tq9pf,Uid:be9f5165-a4d6-4681-a43e-5dff581646c1,Namespace:kube-system,Attempt:0,}"
Apr 17 23:56:39.774224 kubelet[2508]: E0417 23:56:39.774106 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:39.791425 kubelet[2508]: I0417 23:56:39.790848 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6fh7d" podStartSLOduration=5.288007956 podStartE2EDuration="11.790832727s" podCreationTimestamp="2026-04-17 23:56:28 +0000 UTC" firstStartedPulling="2026-04-17 23:56:28.91058388 +0000 UTC m=+8.318880237" lastFinishedPulling="2026-04-17 23:56:35.413408661 +0000 UTC m=+14.821705008" observedRunningTime="2026-04-17 23:56:39.790254581 +0000 UTC m=+19.198550933" watchObservedRunningTime="2026-04-17 23:56:39.790832727 +0000 UTC m=+19.199129089"
Apr 17 23:56:40.738758 systemd-networkd[1391]: cilium_host: Link UP
Apr 17 23:56:40.738855 systemd-networkd[1391]: cilium_net: Link UP
Apr 17 23:56:40.738857 systemd-networkd[1391]: cilium_net: Gained carrier
Apr 17 23:56:40.739903 systemd-networkd[1391]: cilium_host: Gained carrier
Apr 17 23:56:40.777142 kubelet[2508]: E0417 23:56:40.776121 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:40.837173 systemd-networkd[1391]: cilium_vxlan: Link UP
Apr 17 23:56:40.837179 systemd-networkd[1391]: cilium_vxlan: Gained carrier
Apr 17 23:56:41.031711 kernel: NET: Registered PF_ALG protocol family
Apr 17 23:56:41.037829 systemd-networkd[1391]: cilium_host: Gained IPv6LL
Apr 17 23:56:41.577250 systemd-networkd[1391]: lxc_health: Link UP
Apr 17 23:56:41.584116 systemd-networkd[1391]: lxc_health: Gained carrier
Apr 17 23:56:41.661906 systemd-networkd[1391]: cilium_net: Gained IPv6LL
Apr 17 23:56:41.778333 kubelet[2508]: E0417 23:56:41.778245 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:41.991442 systemd-networkd[1391]: lxc4f09155cdabc: Link UP
Apr 17 23:56:42.000197 systemd-networkd[1391]: lxc8d46afdf5886: Link UP
Apr 17 23:56:42.010695 kernel: eth0: renamed from tmpeaa93
Apr 17 23:56:42.015686 kernel: eth0: renamed from tmpa9786
Apr 17 23:56:42.019367 systemd-networkd[1391]: lxc4f09155cdabc: Gained carrier
Apr 17 23:56:42.019528 systemd-networkd[1391]: lxc8d46afdf5886: Gained carrier
Apr 17 23:56:42.819395 kubelet[2508]: E0417 23:56:42.818981 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:42.877972 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL
Apr 17 23:56:42.891139 update_engine[1453]: I20260417 23:56:42.891016 1453 update_attempter.cc:509] Updating boot flags...
Apr 17 23:56:42.958004 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3295)
Apr 17 23:56:42.988929 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3295)
Apr 17 23:56:43.197913 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Apr 17 23:56:43.518011 systemd-networkd[1391]: lxc4f09155cdabc: Gained IPv6LL
Apr 17 23:56:43.518561 systemd-networkd[1391]: lxc8d46afdf5886: Gained IPv6LL
Apr 17 23:56:45.472841 containerd[1468]: time="2026-04-17T23:56:45.472406804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:56:45.472841 containerd[1468]: time="2026-04-17T23:56:45.472830783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:56:45.473297 containerd[1468]: time="2026-04-17T23:56:45.472862826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:56:45.473297 containerd[1468]: time="2026-04-17T23:56:45.473056302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:56:45.476442 containerd[1468]: time="2026-04-17T23:56:45.476049962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:56:45.476442 containerd[1468]: time="2026-04-17T23:56:45.476102066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:56:45.476442 containerd[1468]: time="2026-04-17T23:56:45.476123847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:56:45.476442 containerd[1468]: time="2026-04-17T23:56:45.476213469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:56:45.506891 systemd[1]: Started cri-containerd-eaa9328421a59cb6c5f36d1cc4b5d8f034c64a65f78d7ab78f7a830333725834.scope - libcontainer container eaa9328421a59cb6c5f36d1cc4b5d8f034c64a65f78d7ab78f7a830333725834.
Apr 17 23:56:45.510446 systemd[1]: Started cri-containerd-a978671ff95293624df0350ca55b269fc137a5a147aa6f93b769f00db57582f4.scope - libcontainer container a978671ff95293624df0350ca55b269fc137a5a147aa6f93b769f00db57582f4.
Apr 17 23:56:45.518365 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 23:56:45.520202 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 23:56:45.546537 containerd[1468]: time="2026-04-17T23:56:45.546462992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tq9pf,Uid:be9f5165-a4d6-4681-a43e-5dff581646c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaa9328421a59cb6c5f36d1cc4b5d8f034c64a65f78d7ab78f7a830333725834\""
Apr 17 23:56:45.547730 containerd[1468]: time="2026-04-17T23:56:45.547576798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lv7hn,Uid:adf95be6-49bf-4313-9331-dadbee756dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a978671ff95293624df0350ca55b269fc137a5a147aa6f93b769f00db57582f4\""
Apr 17 23:56:45.547948 kubelet[2508]: E0417 23:56:45.547924 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:45.548867 kubelet[2508]: E0417 23:56:45.548736 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:45.552591 containerd[1468]: time="2026-04-17T23:56:45.552547519Z" level=info msg="CreateContainer within sandbox \"eaa9328421a59cb6c5f36d1cc4b5d8f034c64a65f78d7ab78f7a830333725834\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 23:56:45.556084 containerd[1468]: time="2026-04-17T23:56:45.556042417Z" level=info msg="CreateContainer within sandbox \"a978671ff95293624df0350ca55b269fc137a5a147aa6f93b769f00db57582f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 23:56:45.568507 containerd[1468]: time="2026-04-17T23:56:45.568437836Z" level=info msg="CreateContainer within sandbox \"eaa9328421a59cb6c5f36d1cc4b5d8f034c64a65f78d7ab78f7a830333725834\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a610132ef57ddc1c409ddffccce8efe2a21037c6e9622342e43a524164789b28\""
Apr 17 23:56:45.569165 containerd[1468]: time="2026-04-17T23:56:45.569112073Z" level=info msg="StartContainer for \"a610132ef57ddc1c409ddffccce8efe2a21037c6e9622342e43a524164789b28\""
Apr 17 23:56:45.575165 containerd[1468]: time="2026-04-17T23:56:45.574604748Z" level=info msg="CreateContainer within sandbox \"a978671ff95293624df0350ca55b269fc137a5a147aa6f93b769f00db57582f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"baef50a6898a0d20877fe4b2e7d34a962ed7d2cf575577f0336ebb31fd72d7d3\""
Apr 17 23:56:45.576389 containerd[1468]: time="2026-04-17T23:56:45.576358071Z" level=info msg="StartContainer for \"baef50a6898a0d20877fe4b2e7d34a962ed7d2cf575577f0336ebb31fd72d7d3\""
Apr 17 23:56:45.598898 systemd[1]: Started cri-containerd-a610132ef57ddc1c409ddffccce8efe2a21037c6e9622342e43a524164789b28.scope - libcontainer container a610132ef57ddc1c409ddffccce8efe2a21037c6e9622342e43a524164789b28.
Apr 17 23:56:45.603036 systemd[1]: Started cri-containerd-baef50a6898a0d20877fe4b2e7d34a962ed7d2cf575577f0336ebb31fd72d7d3.scope - libcontainer container baef50a6898a0d20877fe4b2e7d34a962ed7d2cf575577f0336ebb31fd72d7d3.
Apr 17 23:56:45.624051 containerd[1468]: time="2026-04-17T23:56:45.624011064Z" level=info msg="StartContainer for \"a610132ef57ddc1c409ddffccce8efe2a21037c6e9622342e43a524164789b28\" returns successfully"
Apr 17 23:56:45.630190 containerd[1468]: time="2026-04-17T23:56:45.630118035Z" level=info msg="StartContainer for \"baef50a6898a0d20877fe4b2e7d34a962ed7d2cf575577f0336ebb31fd72d7d3\" returns successfully"
Apr 17 23:56:45.789682 kubelet[2508]: E0417 23:56:45.789475 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:45.792124 kubelet[2508]: E0417 23:56:45.792084 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:45.819367 kubelet[2508]: I0417 23:56:45.819314 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tq9pf" podStartSLOduration=17.819300317 podStartE2EDuration="17.819300317s" podCreationTimestamp="2026-04-17 23:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:56:45.815308124 +0000 UTC m=+25.223604471" watchObservedRunningTime="2026-04-17 23:56:45.819300317 +0000 UTC m=+25.227596671"
Apr 17 23:56:45.819533 kubelet[2508]: I0417 23:56:45.819415 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lv7hn" podStartSLOduration=17.819411365 podStartE2EDuration="17.819411365s" podCreationTimestamp="2026-04-17 23:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:56:45.803145264 +0000 UTC m=+25.211441611" watchObservedRunningTime="2026-04-17 23:56:45.819411365 +0000 UTC m=+25.227707728"
Apr 17 23:56:46.528298 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:40088.service - OpenSSH per-connection server daemon (10.0.0.1:40088).
Apr 17 23:56:46.566941 sshd[3905]: Accepted publickey for core from 10.0.0.1 port 40088 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:56:46.568291 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:56:46.571849 systemd-logind[1449]: New session 8 of user core.
Apr 17 23:56:46.585970 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 17 23:56:46.728697 sshd[3905]: pam_unix(sshd:session): session closed for user core
Apr 17 23:56:46.731846 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:40088.service: Deactivated successfully.
Apr 17 23:56:46.733455 systemd[1]: session-8.scope: Deactivated successfully.
Apr 17 23:56:46.734364 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit.
Apr 17 23:56:46.735463 systemd-logind[1449]: Removed session 8.
Apr 17 23:56:46.794186 kubelet[2508]: E0417 23:56:46.793997 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:46.794186 kubelet[2508]: E0417 23:56:46.794006 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:47.796401 kubelet[2508]: E0417 23:56:47.796336 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:47.900399 kubelet[2508]: I0417 23:56:47.900317 2508 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:56:47.900836 kubelet[2508]: E0417 23:56:47.900808 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:48.798643 kubelet[2508]: E0417 23:56:48.798560 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:56:48.949278 kernel: hrtimer: interrupt took 8782066 ns
Apr 17 23:56:51.742816 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:60812.service - OpenSSH per-connection server daemon (10.0.0.1:60812).
Apr 17 23:56:51.777239 sshd[3931]: Accepted publickey for core from 10.0.0.1 port 60812 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:56:51.778576 sshd[3931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:56:51.782830 systemd-logind[1449]: New session 9 of user core.
Apr 17 23:56:51.788930 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 17 23:56:51.892054 sshd[3931]: pam_unix(sshd:session): session closed for user core
Apr 17 23:56:51.895441 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:60812.service: Deactivated successfully.
Apr 17 23:56:51.896842 systemd[1]: session-9.scope: Deactivated successfully.
Apr 17 23:56:51.897444 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit.
Apr 17 23:56:51.898952 systemd-logind[1449]: Removed session 9.
Apr 17 23:56:56.903341 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:60818.service - OpenSSH per-connection server daemon (10.0.0.1:60818).
Apr 17 23:56:56.937897 sshd[3947]: Accepted publickey for core from 10.0.0.1 port 60818 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:56:56.938966 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:56:56.942518 systemd-logind[1449]: New session 10 of user core.
Apr 17 23:56:56.946860 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 23:56:57.052829 sshd[3947]: pam_unix(sshd:session): session closed for user core
Apr 17 23:56:57.056233 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:60818.service: Deactivated successfully.
Apr 17 23:56:57.058033 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 23:56:57.058580 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit.
Apr 17 23:56:57.059617 systemd-logind[1449]: Removed session 10.
Apr 17 23:57:02.066866 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:55134.service - OpenSSH per-connection server daemon (10.0.0.1:55134).
Apr 17 23:57:02.101373 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 55134 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:02.102781 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:02.108582 systemd-logind[1449]: New session 11 of user core.
Apr 17 23:57:02.118978 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 23:57:02.229584 sshd[3964]: pam_unix(sshd:session): session closed for user core
Apr 17 23:57:02.239880 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:55134.service: Deactivated successfully.
Apr 17 23:57:02.241638 systemd[1]: session-11.scope: Deactivated successfully.
Apr 17 23:57:02.243455 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit.
Apr 17 23:57:02.249238 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:55142.service - OpenSSH per-connection server daemon (10.0.0.1:55142).
Apr 17 23:57:02.250563 systemd-logind[1449]: Removed session 11.
Apr 17 23:57:02.280273 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 55142 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:02.281443 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:02.285699 systemd-logind[1449]: New session 12 of user core.
Apr 17 23:57:02.295985 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 23:57:02.443977 sshd[3980]: pam_unix(sshd:session): session closed for user core
Apr 17 23:57:02.459381 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:55142.service: Deactivated successfully.
Apr 17 23:57:02.462273 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 23:57:02.465356 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit.
Apr 17 23:57:02.484139 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:55158.service - OpenSSH per-connection server daemon (10.0.0.1:55158).
Apr 17 23:57:02.485157 systemd-logind[1449]: Removed session 12.
Apr 17 23:57:02.523454 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 55158 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:02.525107 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:02.528840 systemd-logind[1449]: New session 13 of user core.
Apr 17 23:57:02.534951 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 23:57:02.646526 sshd[3992]: pam_unix(sshd:session): session closed for user core
Apr 17 23:57:02.650129 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:55158.service: Deactivated successfully.
Apr 17 23:57:02.651757 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 23:57:02.652968 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit.
Apr 17 23:57:02.654072 systemd-logind[1449]: Removed session 13.
Apr 17 23:57:07.658459 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:55162.service - OpenSSH per-connection server daemon (10.0.0.1:55162).
Apr 17 23:57:07.695126 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 55162 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:07.696490 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:07.700433 systemd-logind[1449]: New session 14 of user core.
Apr 17 23:57:07.706981 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:57:07.806823 sshd[4007]: pam_unix(sshd:session): session closed for user core
Apr 17 23:57:07.809480 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:55162.service: Deactivated successfully.
Apr 17 23:57:07.810797 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:57:07.811360 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:57:07.812172 systemd-logind[1449]: Removed session 14.
Apr 17 23:57:12.818079 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:58928.service - OpenSSH per-connection server daemon (10.0.0.1:58928).
Apr 17 23:57:12.852300 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 58928 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:12.853521 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:12.857095 systemd-logind[1449]: New session 15 of user core.
Apr 17 23:57:12.868873 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:57:12.975347 sshd[4021]: pam_unix(sshd:session): session closed for user core
Apr 17 23:57:12.985222 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:58928.service: Deactivated successfully.
Apr 17 23:57:12.986556 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:57:12.987781 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:57:12.993130 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:58940.service - OpenSSH per-connection server daemon (10.0.0.1:58940).
Apr 17 23:57:12.994114 systemd-logind[1449]: Removed session 15.
Apr 17 23:57:13.028034 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 58940 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:13.029279 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:13.032978 systemd-logind[1449]: New session 16 of user core.
Apr 17 23:57:13.047921 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:57:13.206858 sshd[4035]: pam_unix(sshd:session): session closed for user core
Apr 17 23:57:13.218802 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:58940.service: Deactivated successfully.
Apr 17 23:57:13.220291 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:57:13.221437 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:57:13.222487 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:58954.service - OpenSSH per-connection server daemon (10.0.0.1:58954).
Apr 17 23:57:13.223089 systemd-logind[1449]: Removed session 16.
Apr 17 23:57:13.260166 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 58954 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:13.261379 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:13.265417 systemd-logind[1449]: New session 17 of user core.
Apr 17 23:57:13.272840 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:57:13.654433 sshd[4047]: pam_unix(sshd:session): session closed for user core
Apr 17 23:57:13.660121 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:58954.service: Deactivated successfully.
Apr 17 23:57:13.662216 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:57:13.663760 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:57:13.669075 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:58962.service - OpenSSH per-connection server daemon (10.0.0.1:58962).
Apr 17 23:57:13.670105 systemd-logind[1449]: Removed session 17.
Apr 17 23:57:13.702999 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 58962 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:13.704288 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:13.707887 systemd-logind[1449]: New session 18 of user core.
Apr 17 23:57:13.717829 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:57:13.926365 sshd[4065]: pam_unix(sshd:session): session closed for user core
Apr 17 23:57:13.935339 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:58962.service: Deactivated successfully.
Apr 17 23:57:13.937845 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:57:13.939517 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:57:13.948004 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:58970.service - OpenSSH per-connection server daemon (10.0.0.1:58970).
Apr 17 23:57:13.948742 systemd-logind[1449]: Removed session 18.
Apr 17 23:57:13.979217 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 58970 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:13.980411 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:13.985396 systemd-logind[1449]: New session 19 of user core.
Apr 17 23:57:13.997886 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:57:14.097325 sshd[4078]: pam_unix(sshd:session): session closed for user core
Apr 17 23:57:14.100064 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:58970.service: Deactivated successfully.
Apr 17 23:57:14.101370 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:57:14.102030 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:57:14.102886 systemd-logind[1449]: Removed session 19.
Apr 17 23:57:19.109433 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:59012.service - OpenSSH per-connection server daemon (10.0.0.1:59012).
Apr 17 23:57:19.144557 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 59012 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:19.145807 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:19.149264 systemd-logind[1449]: New session 20 of user core.
Apr 17 23:57:19.162859 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:57:19.261844 sshd[4097]: pam_unix(sshd:session): session closed for user core
Apr 17 23:57:19.264770 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:59012.service: Deactivated successfully.
Apr 17 23:57:19.266446 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:57:19.267158 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:57:19.268002 systemd-logind[1449]: Removed session 20.
Apr 17 23:57:24.273575 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:50304.service - OpenSSH per-connection server daemon (10.0.0.1:50304).
Apr 17 23:57:24.310476 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 50304 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:24.311992 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:24.316055 systemd-logind[1449]: New session 21 of user core.
Apr 17 23:57:24.324254 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:57:24.424847 sshd[4114]: pam_unix(sshd:session): session closed for user core
Apr 17 23:57:24.435001 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:50304.service: Deactivated successfully.
Apr 17 23:57:24.436355 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:57:24.437394 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Apr 17 23:57:24.445013 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:50314.service - OpenSSH per-connection server daemon (10.0.0.1:50314).
Apr 17 23:57:24.445829 systemd-logind[1449]: Removed session 21.
Apr 17 23:57:24.478346 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 50314 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:57:24.479872 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:57:24.485901 systemd-logind[1449]: New session 22 of user core.
Apr 17 23:57:24.495018 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 17 23:57:25.836802 containerd[1468]: time="2026-04-17T23:57:25.836574144Z" level=info msg="StopContainer for \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\" with timeout 30 (s)"
Apr 17 23:57:25.838599 containerd[1468]: time="2026-04-17T23:57:25.838428622Z" level=info msg="Stop container \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\" with signal terminated"
Apr 17 23:57:25.858021 systemd[1]: cri-containerd-d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b.scope: Deactivated successfully.
Apr 17 23:57:25.867584 containerd[1468]: time="2026-04-17T23:57:25.867537465Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:57:25.879639 containerd[1468]: time="2026-04-17T23:57:25.879346048Z" level=info msg="StopContainer for \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\" with timeout 2 (s)"
Apr 17 23:57:25.880134 containerd[1468]: time="2026-04-17T23:57:25.880064073Z" level=info msg="Stop container \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\" with signal terminated"
Apr 17 23:57:25.889145 systemd-networkd[1391]: lxc_health: Link DOWN
Apr 17 23:57:25.889153 systemd-networkd[1391]: lxc_health: Lost carrier
Apr 17 23:57:25.893807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b-rootfs.mount: Deactivated successfully.
Apr 17 23:57:25.902012 containerd[1468]: time="2026-04-17T23:57:25.901939300Z" level=info msg="shim disconnected" id=d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b namespace=k8s.io
Apr 17 23:57:25.902012 containerd[1468]: time="2026-04-17T23:57:25.902002524Z" level=warning msg="cleaning up after shim disconnected" id=d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b namespace=k8s.io
Apr 17 23:57:25.902012 containerd[1468]: time="2026-04-17T23:57:25.902012703Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:57:25.913120 systemd[1]: cri-containerd-a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673.scope: Deactivated successfully.
Apr 17 23:57:25.913337 systemd[1]: cri-containerd-a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673.scope: Consumed 6.134s CPU time.
Apr 17 23:57:25.923063 containerd[1468]: time="2026-04-17T23:57:25.923014250Z" level=info msg="StopContainer for \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\" returns successfully"
Apr 17 23:57:25.924102 containerd[1468]: time="2026-04-17T23:57:25.923617983Z" level=info msg="StopPodSandbox for \"9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125\""
Apr 17 23:57:25.924102 containerd[1468]: time="2026-04-17T23:57:25.923708602Z" level=info msg="Container to stop \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:57:25.925790 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125-shm.mount: Deactivated successfully.
Apr 17 23:57:25.931975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673-rootfs.mount: Deactivated successfully.
Apr 17 23:57:25.932508 systemd[1]: cri-containerd-9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125.scope: Deactivated successfully.
Apr 17 23:57:25.954448 containerd[1468]: time="2026-04-17T23:57:25.954397298Z" level=info msg="shim disconnected" id=9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125 namespace=k8s.io
Apr 17 23:57:25.954448 containerd[1468]: time="2026-04-17T23:57:25.954442697Z" level=warning msg="cleaning up after shim disconnected" id=9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125 namespace=k8s.io
Apr 17 23:57:25.954448 containerd[1468]: time="2026-04-17T23:57:25.954448783Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:57:25.955077 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125-rootfs.mount: Deactivated successfully.
Apr 17 23:57:25.955450 containerd[1468]: time="2026-04-17T23:57:25.955407955Z" level=info msg="shim disconnected" id=a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673 namespace=k8s.io
Apr 17 23:57:25.955548 containerd[1468]: time="2026-04-17T23:57:25.955452121Z" level=warning msg="cleaning up after shim disconnected" id=a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673 namespace=k8s.io
Apr 17 23:57:25.955548 containerd[1468]: time="2026-04-17T23:57:25.955460961Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:57:25.965685 containerd[1468]: time="2026-04-17T23:57:25.965602197Z" level=info msg="TearDown network for sandbox \"9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125\" successfully"
Apr 17 23:57:25.966096 containerd[1468]: time="2026-04-17T23:57:25.965711147Z" level=info msg="StopPodSandbox for \"9793b10a48bb07c26de2121e0786f15cba6238fa2bcd9f6394bbbaae99612125\" returns successfully"
Apr 17 23:57:25.968880 containerd[1468]: time="2026-04-17T23:57:25.968839125Z" level=info msg="StopContainer for \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\" returns successfully"
Apr 17 23:57:25.969342 containerd[1468]: time="2026-04-17T23:57:25.969310825Z" level=info msg="StopPodSandbox for \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\""
Apr 17 23:57:25.969392 containerd[1468]: time="2026-04-17T23:57:25.969347683Z" level=info msg="Container to stop \"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:57:25.969392 containerd[1468]: time="2026-04-17T23:57:25.969360242Z" level=info msg="Container to stop \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:57:25.969392 containerd[1468]: time="2026-04-17T23:57:25.969367092Z" level=info msg="Container to stop \"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:57:25.969392 containerd[1468]: time="2026-04-17T23:57:25.969373832Z" level=info msg="Container to stop \"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:57:25.969392 containerd[1468]: time="2026-04-17T23:57:25.969379853Z" level=info msg="Container to stop \"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:57:25.974628 systemd[1]: cri-containerd-11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686.scope: Deactivated successfully.
Apr 17 23:57:25.994289 containerd[1468]: time="2026-04-17T23:57:25.994152575Z" level=info msg="shim disconnected" id=11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686 namespace=k8s.io
Apr 17 23:57:25.994289 containerd[1468]: time="2026-04-17T23:57:25.994302745Z" level=warning msg="cleaning up after shim disconnected" id=11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686 namespace=k8s.io
Apr 17 23:57:25.994780 containerd[1468]: time="2026-04-17T23:57:25.994317456Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:57:26.004946 containerd[1468]: time="2026-04-17T23:57:26.004896169Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:57:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 17 23:57:26.005746 containerd[1468]: time="2026-04-17T23:57:26.005723638Z" level=info msg="TearDown network for sandbox \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" successfully"
Apr 17 23:57:26.005796 containerd[1468]: time="2026-04-17T23:57:26.005746985Z" level=info msg="StopPodSandbox for \"11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686\" returns successfully"
Apr 17 23:57:26.037089 kubelet[2508]: I0417 23:57:26.037013 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-run\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037089 kubelet[2508]: I0417 23:57:26.037082 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skm6n\" (UniqueName: \"kubernetes.io/projected/1f4818ae-1272-408c-9941-3d075c787340-kube-api-access-skm6n\") pod \"1f4818ae-1272-408c-9941-3d075c787340\" (UID: \"1f4818ae-1272-408c-9941-3d075c787340\") "
Apr 17 23:57:26.037583 kubelet[2508]: I0417 23:57:26.037129 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f4818ae-1272-408c-9941-3d075c787340-cilium-config-path\") pod \"1f4818ae-1272-408c-9941-3d075c787340\" (UID: \"1f4818ae-1272-408c-9941-3d075c787340\") "
Apr 17 23:57:26.037583 kubelet[2508]: I0417 23:57:26.037145 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-lib-modules\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037583 kubelet[2508]: I0417 23:57:26.037157 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-hostproc\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037583 kubelet[2508]: I0417 23:57:26.037141 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:57:26.037583 kubelet[2508]: I0417 23:57:26.037172 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-config-path\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037583 kubelet[2508]: I0417 23:57:26.037203 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-bpf-maps\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037819 kubelet[2508]: I0417 23:57:26.037219 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-xtables-lock\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037819 kubelet[2508]: I0417 23:57:26.037264 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-cgroup\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037819 kubelet[2508]: I0417 23:57:26.037275 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cni-path\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037819 kubelet[2508]: I0417 23:57:26.037270 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:57:26.037819 kubelet[2508]: I0417 23:57:26.037289 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c97pc\" (UniqueName: \"kubernetes.io/projected/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-kube-api-access-c97pc\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037819 kubelet[2508]: I0417 23:57:26.037331 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-hubble-tls\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037921 kubelet[2508]: I0417 23:57:26.037353 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-host-proc-sys-net\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037921 kubelet[2508]: I0417 23:57:26.037368 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-host-proc-sys-kernel\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037921 kubelet[2508]: I0417 23:57:26.037378 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-etc-cni-netd\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037921 kubelet[2508]: I0417 23:57:26.037390 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-clustermesh-secrets\") pod \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\" (UID: \"925c32f5-58fb-421f-aeb4-ec0de0b9bd25\") "
Apr 17 23:57:26.037921 kubelet[2508]: I0417 23:57:26.037425 2508 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.037921 kubelet[2508]: I0417 23:57:26.037438 2508 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.038011 kubelet[2508]: I0417 23:57:26.037608 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:57:26.038011 kubelet[2508]: I0417 23:57:26.037646 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-hostproc" (OuterVolumeSpecName: "hostproc") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:57:26.040231 kubelet[2508]: I0417 23:57:26.040053 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-kube-api-access-c97pc" (OuterVolumeSpecName: "kube-api-access-c97pc") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "kube-api-access-c97pc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:57:26.040231 kubelet[2508]: I0417 23:57:26.040091 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:57:26.040231 kubelet[2508]: I0417 23:57:26.040103 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:57:26.040231 kubelet[2508]: I0417 23:57:26.040113 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cni-path" (OuterVolumeSpecName: "cni-path") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:57:26.040390 kubelet[2508]: I0417 23:57:26.040255 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:57:26.040390 kubelet[2508]: I0417 23:57:26.040343 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:57:26.040390 kubelet[2508]: I0417 23:57:26.040358 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:57:26.040390 kubelet[2508]: I0417 23:57:26.040370 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:57:26.041031 kubelet[2508]: I0417 23:57:26.040992 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 17 23:57:26.042621 kubelet[2508]: I0417 23:57:26.042539 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4818ae-1272-408c-9941-3d075c787340-kube-api-access-skm6n" (OuterVolumeSpecName: "kube-api-access-skm6n") pod "1f4818ae-1272-408c-9941-3d075c787340" (UID: "1f4818ae-1272-408c-9941-3d075c787340"). InnerVolumeSpecName "kube-api-access-skm6n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:57:26.044351 kubelet[2508]: I0417 23:57:26.044281 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f4818ae-1272-408c-9941-3d075c787340-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1f4818ae-1272-408c-9941-3d075c787340" (UID: "1f4818ae-1272-408c-9941-3d075c787340"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:57:26.045146 kubelet[2508]: I0417 23:57:26.045104 2508 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "925c32f5-58fb-421f-aeb4-ec0de0b9bd25" (UID: "925c32f5-58fb-421f-aeb4-ec0de0b9bd25"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:57:26.137871 kubelet[2508]: I0417 23:57:26.137596 2508 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.137871 kubelet[2508]: I0417 23:57:26.137646 2508 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.137871 kubelet[2508]: I0417 23:57:26.137730 2508 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.137871 kubelet[2508]: I0417 23:57:26.137739 2508 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.137871 kubelet[2508]: I0417 23:57:26.137745 2508 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.137871 kubelet[2508]: I0417 23:57:26.137750 2508 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.137871 kubelet[2508]: I0417 23:57:26.137756 2508 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c97pc\" (UniqueName: \"kubernetes.io/projected/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-kube-api-access-c97pc\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.137871 kubelet[2508]: I0417 23:57:26.137783 2508 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.138238 kubelet[2508]: I0417 23:57:26.137789 2508 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.138238 kubelet[2508]: I0417 23:57:26.137795 2508 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.138238 kubelet[2508]: I0417 23:57:26.137800 2508 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.138238 kubelet[2508]: I0417 23:57:26.137806 2508 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/925c32f5-58fb-421f-aeb4-ec0de0b9bd25-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.138238 kubelet[2508]: I0417 23:57:26.137811 2508 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-skm6n\" (UniqueName: \"kubernetes.io/projected/1f4818ae-1272-408c-9941-3d075c787340-kube-api-access-skm6n\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.138238 kubelet[2508]: I0417 23:57:26.137817 2508 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f4818ae-1272-408c-9941-3d075c787340-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:57:26.693642 systemd[1]: Removed slice kubepods-besteffort-pod1f4818ae_1272_408c_9941_3d075c787340.slice - libcontainer container kubepods-besteffort-pod1f4818ae_1272_408c_9941_3d075c787340.slice.
Apr 17 23:57:26.694839 systemd[1]: Removed slice kubepods-burstable-pod925c32f5_58fb_421f_aeb4_ec0de0b9bd25.slice - libcontainer container kubepods-burstable-pod925c32f5_58fb_421f_aeb4_ec0de0b9bd25.slice.
Apr 17 23:57:26.694959 systemd[1]: kubepods-burstable-pod925c32f5_58fb_421f_aeb4_ec0de0b9bd25.slice: Consumed 6.208s CPU time.
Apr 17 23:57:26.847297 systemd[1]: var-lib-kubelet-pods-1f4818ae\x2d1272\x2d408c\x2d9941\x2d3d075c787340-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskm6n.mount: Deactivated successfully.
Apr 17 23:57:26.847442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686-rootfs.mount: Deactivated successfully.
Apr 17 23:57:26.847513 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11ee1102c3dba1e45ffa4e2bfa9425dbc64091b70e37f5be679edc982229d686-shm.mount: Deactivated successfully.
Apr 17 23:57:26.847608 systemd[1]: var-lib-kubelet-pods-925c32f5\x2d58fb\x2d421f\x2daeb4\x2dec0de0b9bd25-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc97pc.mount: Deactivated successfully.
Apr 17 23:57:26.847729 systemd[1]: var-lib-kubelet-pods-925c32f5\x2d58fb\x2d421f\x2daeb4\x2dec0de0b9bd25-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 17 23:57:26.847820 systemd[1]: var-lib-kubelet-pods-925c32f5\x2d58fb\x2d421f\x2daeb4\x2dec0de0b9bd25-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 17 23:57:26.890511 kubelet[2508]: I0417 23:57:26.890437 2508 scope.go:117] "RemoveContainer" containerID="a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673"
Apr 17 23:57:26.892282 containerd[1468]: time="2026-04-17T23:57:26.892250938Z" level=info msg="RemoveContainer for \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\""
Apr 17 23:57:26.900811 containerd[1468]: time="2026-04-17T23:57:26.900744674Z" level=info msg="RemoveContainer for \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\" returns successfully"
Apr 17 23:57:26.901072 kubelet[2508]: I0417 23:57:26.901055 2508 scope.go:117] "RemoveContainer" containerID="7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f"
Apr 17 23:57:26.902154 containerd[1468]: time="2026-04-17T23:57:26.902093225Z" level=info msg="RemoveContainer for \"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f\""
Apr 17 23:57:26.904903 containerd[1468]: time="2026-04-17T23:57:26.904853596Z" level=info msg="RemoveContainer for \"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f\" returns successfully"
Apr 17 23:57:26.905121 kubelet[2508]: I0417 23:57:26.905084 2508 scope.go:117] "RemoveContainer" containerID="be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9"
Apr 17 23:57:26.906607 containerd[1468]: time="2026-04-17T23:57:26.906559889Z" level=info msg="RemoveContainer for \"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9\""
Apr 17 23:57:26.910494 containerd[1468]: time="2026-04-17T23:57:26.910432138Z" level=info msg="RemoveContainer for \"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9\" returns successfully"
Apr 17 23:57:26.911445 kubelet[2508]: I0417 23:57:26.911421 2508 scope.go:117] "RemoveContainer" containerID="23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d"
Apr 17 23:57:26.912459 containerd[1468]: time="2026-04-17T23:57:26.912428903Z" level=info msg="RemoveContainer for \"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d\""
Apr 17 23:57:26.915164 containerd[1468]: time="2026-04-17T23:57:26.915121719Z" level=info msg="RemoveContainer for \"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d\" returns successfully"
Apr 17 23:57:26.915393 kubelet[2508]: I0417 23:57:26.915348 2508 scope.go:117] "RemoveContainer" containerID="fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff"
Apr 17 23:57:26.916633 containerd[1468]: time="2026-04-17T23:57:26.916368507Z" level=info msg="RemoveContainer for \"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff\""
Apr 17 23:57:26.919581 containerd[1468]: time="2026-04-17T23:57:26.919434604Z" level=info msg="RemoveContainer for \"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff\" returns successfully"
Apr 17 23:57:26.919887 kubelet[2508]: I0417 23:57:26.919574 2508 scope.go:117] "RemoveContainer" containerID="a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673"
Apr 17 23:57:26.924097 containerd[1468]: time="2026-04-17T23:57:26.924007104Z" level=error msg="ContainerStatus for \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\": not found"
Apr 17 23:57:26.933116 kubelet[2508]: E0417 23:57:26.933055 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\": not found" containerID="a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673"
Apr 17 23:57:26.933116 kubelet[2508]: I0417 23:57:26.933089 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673"} err="failed to get container status \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\": rpc error: code = NotFound desc = an error occurred when try to find container \"a509064c6baea86d6df60d6f7f1f12ea10cbcf26efcc73effbe8b13ba53d6673\": not found"
Apr 17 23:57:26.933116 kubelet[2508]: I0417 23:57:26.933124 2508 scope.go:117] "RemoveContainer" containerID="7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f"
Apr 17 23:57:26.933445 containerd[1468]: time="2026-04-17T23:57:26.933408149Z" level=error msg="ContainerStatus for \"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f\": not found"
Apr 17 23:57:26.933696 kubelet[2508]: E0417 23:57:26.933638 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f\": not found" containerID="7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f"
Apr 17 23:57:26.933803 kubelet[2508]: I0417 23:57:26.933694 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f"} err="failed to get container status \"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7479ea7df0517389ec9b8b0cb16bab96bf7790b588be35677ea5ca662442596f\": not found"
Apr 17 23:57:26.933803 kubelet[2508]: I0417 23:57:26.933706 2508 scope.go:117] "RemoveContainer" containerID="be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9"
Apr 17 23:57:26.934016 containerd[1468]: time="2026-04-17T23:57:26.933964807Z" level=error msg="ContainerStatus for \"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9\": not found"
Apr 17 23:57:26.934238 kubelet[2508]: E0417 23:57:26.934200 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9\": not found" containerID="be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9"
Apr 17 23:57:26.934262 kubelet[2508]: I0417 23:57:26.934232 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9"} err="failed to get container status \"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"be845bf6612f2167fa8d737579cd991081d79d070cf30c0169536db71f8685c9\": not found"
Apr 17 23:57:26.934262 kubelet[2508]: I0417 23:57:26.934253 2508 scope.go:117] "RemoveContainer" containerID="23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d"
Apr 17 23:57:26.934494 containerd[1468]: time="2026-04-17T23:57:26.934469931Z" level=error msg="ContainerStatus for \"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d\": not found"
Apr 17 23:57:26.934621 kubelet[2508]: E0417 23:57:26.934550 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d\": not found" containerID="23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d"
Apr 17 23:57:26.934621 kubelet[2508]: I0417 23:57:26.934586 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d"} err="failed to get container status \"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"23a721cd0ef22ff8e7a646967b7975fb6d2bc7acd4573bfbe55a07fb0259aa9d\": not found"
Apr 17 23:57:26.934621 kubelet[2508]: I0417 23:57:26.934603 2508 scope.go:117] "RemoveContainer" containerID="fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff"
Apr 17 23:57:26.934855 containerd[1468]: time="2026-04-17T23:57:26.934830616Z" level=error msg="ContainerStatus for \"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff\": not found"
Apr 17 23:57:26.934947 kubelet[2508]: E0417 23:57:26.934907 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff\": not found" containerID="fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff"
Apr 17 23:57:26.934947 kubelet[2508]: I0417 23:57:26.934932 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff"} err="failed to get container status \"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb76d29f4004ec8a3c0a171f03d3301365daf121d53b5e9c31cc1548107ca2ff\": not found"
Apr 17
23:57:26.935003 kubelet[2508]: I0417 23:57:26.934949 2508 scope.go:117] "RemoveContainer" containerID="d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b" Apr 17 23:57:26.936864 containerd[1468]: time="2026-04-17T23:57:26.936816853Z" level=info msg="RemoveContainer for \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\"" Apr 17 23:57:26.940621 containerd[1468]: time="2026-04-17T23:57:26.940569902Z" level=info msg="RemoveContainer for \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\" returns successfully" Apr 17 23:57:26.940931 kubelet[2508]: I0417 23:57:26.940904 2508 scope.go:117] "RemoveContainer" containerID="d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b" Apr 17 23:57:26.941137 containerd[1468]: time="2026-04-17T23:57:26.941096488Z" level=error msg="ContainerStatus for \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\": not found" Apr 17 23:57:26.941271 kubelet[2508]: E0417 23:57:26.941232 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\": not found" containerID="d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b" Apr 17 23:57:26.941309 kubelet[2508]: I0417 23:57:26.941277 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b"} err="failed to get container status \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3b315492fb61ff19d972cf8931ba3782277017161976297e9ac80f4db46942b\": not found" Apr 17 23:57:27.799076 sshd[4128]: 
pam_unix(sshd:session): session closed for user core Apr 17 23:57:27.810486 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:50314.service: Deactivated successfully. Apr 17 23:57:27.812101 systemd[1]: session-22.scope: Deactivated successfully. Apr 17 23:57:27.813421 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Apr 17 23:57:27.814488 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:50320.service - OpenSSH per-connection server daemon (10.0.0.1:50320). Apr 17 23:57:27.814964 systemd-logind[1449]: Removed session 22. Apr 17 23:57:27.852681 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 50320 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:57:27.854419 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:57:27.859152 systemd-logind[1449]: New session 23 of user core. Apr 17 23:57:27.872959 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 17 23:57:28.332936 sshd[4293]: pam_unix(sshd:session): session closed for user core Apr 17 23:57:28.345768 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:50320.service: Deactivated successfully. Apr 17 23:57:28.348432 systemd[1]: session-23.scope: Deactivated successfully. Apr 17 23:57:28.350623 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Apr 17 23:57:28.361913 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:50336.service - OpenSSH per-connection server daemon (10.0.0.1:50336). Apr 17 23:57:28.364264 systemd-logind[1449]: Removed session 23. Apr 17 23:57:28.383522 systemd[1]: Created slice kubepods-burstable-pod42e67792_0421_44fc_bffc_9e40b4eba33d.slice - libcontainer container kubepods-burstable-pod42e67792_0421_44fc_bffc_9e40b4eba33d.slice. 
Apr 17 23:57:28.396197 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 50336 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:57:28.397338 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:57:28.400432 systemd-logind[1449]: New session 24 of user core. Apr 17 23:57:28.407835 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 17 23:57:28.465325 sshd[4306]: pam_unix(sshd:session): session closed for user core Apr 17 23:57:28.466827 kubelet[2508]: I0417 23:57:28.466129 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42e67792-0421-44fc-bffc-9e40b4eba33d-hostproc\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.466827 kubelet[2508]: I0417 23:57:28.466155 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42e67792-0421-44fc-bffc-9e40b4eba33d-lib-modules\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.466827 kubelet[2508]: I0417 23:57:28.466172 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42e67792-0421-44fc-bffc-9e40b4eba33d-hubble-tls\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.466827 kubelet[2508]: I0417 23:57:28.466200 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42e67792-0421-44fc-bffc-9e40b4eba33d-cilium-run\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.466827 
kubelet[2508]: I0417 23:57:28.466211 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42e67792-0421-44fc-bffc-9e40b4eba33d-cilium-cgroup\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.466827 kubelet[2508]: I0417 23:57:28.466222 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42e67792-0421-44fc-bffc-9e40b4eba33d-cni-path\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.470993 kubelet[2508]: I0417 23:57:28.466231 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42e67792-0421-44fc-bffc-9e40b4eba33d-clustermesh-secrets\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.470993 kubelet[2508]: I0417 23:57:28.466241 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42e67792-0421-44fc-bffc-9e40b4eba33d-etc-cni-netd\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.470993 kubelet[2508]: I0417 23:57:28.466250 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/42e67792-0421-44fc-bffc-9e40b4eba33d-cilium-ipsec-secrets\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.470993 kubelet[2508]: I0417 23:57:28.466259 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42e67792-0421-44fc-bffc-9e40b4eba33d-host-proc-sys-net\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.470993 kubelet[2508]: I0417 23:57:28.466269 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42e67792-0421-44fc-bffc-9e40b4eba33d-host-proc-sys-kernel\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.471114 kubelet[2508]: I0417 23:57:28.466284 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42e67792-0421-44fc-bffc-9e40b4eba33d-cilium-config-path\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.471114 kubelet[2508]: I0417 23:57:28.466295 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42e67792-0421-44fc-bffc-9e40b4eba33d-bpf-maps\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.471114 kubelet[2508]: I0417 23:57:28.466306 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9n88\" (UniqueName: \"kubernetes.io/projected/42e67792-0421-44fc-bffc-9e40b4eba33d-kube-api-access-q9n88\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.471114 kubelet[2508]: I0417 23:57:28.466320 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/42e67792-0421-44fc-bffc-9e40b4eba33d-xtables-lock\") pod \"cilium-mnrmf\" (UID: \"42e67792-0421-44fc-bffc-9e40b4eba33d\") " pod="kube-system/cilium-mnrmf" Apr 17 23:57:28.472161 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:50336.service: Deactivated successfully. Apr 17 23:57:28.473475 systemd[1]: session-24.scope: Deactivated successfully. Apr 17 23:57:28.474796 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Apr 17 23:57:28.479987 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:50338.service - OpenSSH per-connection server daemon (10.0.0.1:50338). Apr 17 23:57:28.480958 systemd-logind[1449]: Removed session 24. Apr 17 23:57:28.510123 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 50338 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:57:28.511093 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:57:28.515359 systemd-logind[1449]: New session 25 of user core. Apr 17 23:57:28.522842 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 17 23:57:28.691183 kubelet[2508]: I0417 23:57:28.690972 2508 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4818ae-1272-408c-9941-3d075c787340" path="/var/lib/kubelet/pods/1f4818ae-1272-408c-9941-3d075c787340/volumes" Apr 17 23:57:28.691626 kubelet[2508]: E0417 23:57:28.691486 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:28.691626 kubelet[2508]: I0417 23:57:28.691529 2508 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925c32f5-58fb-421f-aeb4-ec0de0b9bd25" path="/var/lib/kubelet/pods/925c32f5-58fb-421f-aeb4-ec0de0b9bd25/volumes" Apr 17 23:57:28.692516 containerd[1468]: time="2026-04-17T23:57:28.692363135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mnrmf,Uid:42e67792-0421-44fc-bffc-9e40b4eba33d,Namespace:kube-system,Attempt:0,}" Apr 17 23:57:28.719234 containerd[1468]: time="2026-04-17T23:57:28.718300501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:57:28.719234 containerd[1468]: time="2026-04-17T23:57:28.718874994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:57:28.719234 containerd[1468]: time="2026-04-17T23:57:28.718913415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:28.719234 containerd[1468]: time="2026-04-17T23:57:28.719077677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:57:28.739907 systemd[1]: Started cri-containerd-33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8.scope - libcontainer container 33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8. Apr 17 23:57:28.764390 containerd[1468]: time="2026-04-17T23:57:28.764324925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mnrmf,Uid:42e67792-0421-44fc-bffc-9e40b4eba33d,Namespace:kube-system,Attempt:0,} returns sandbox id \"33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8\"" Apr 17 23:57:28.766419 kubelet[2508]: E0417 23:57:28.766365 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:28.771819 containerd[1468]: time="2026-04-17T23:57:28.771725382Z" level=info msg="CreateContainer within sandbox \"33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 17 23:57:28.784533 containerd[1468]: time="2026-04-17T23:57:28.784360204Z" level=info msg="CreateContainer within sandbox \"33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"986fd72874af0609b890adad463b8949807b6cdafa8b5ea09d1515758d56345c\"" Apr 17 23:57:28.785163 containerd[1468]: time="2026-04-17T23:57:28.785027862Z" level=info msg="StartContainer for \"986fd72874af0609b890adad463b8949807b6cdafa8b5ea09d1515758d56345c\"" Apr 17 23:57:28.815100 systemd[1]: Started cri-containerd-986fd72874af0609b890adad463b8949807b6cdafa8b5ea09d1515758d56345c.scope - libcontainer container 986fd72874af0609b890adad463b8949807b6cdafa8b5ea09d1515758d56345c. 
Apr 17 23:57:28.838710 containerd[1468]: time="2026-04-17T23:57:28.837055264Z" level=info msg="StartContainer for \"986fd72874af0609b890adad463b8949807b6cdafa8b5ea09d1515758d56345c\" returns successfully" Apr 17 23:57:28.845299 systemd[1]: cri-containerd-986fd72874af0609b890adad463b8949807b6cdafa8b5ea09d1515758d56345c.scope: Deactivated successfully. Apr 17 23:57:28.875871 containerd[1468]: time="2026-04-17T23:57:28.875799617Z" level=info msg="shim disconnected" id=986fd72874af0609b890adad463b8949807b6cdafa8b5ea09d1515758d56345c namespace=k8s.io Apr 17 23:57:28.875871 containerd[1468]: time="2026-04-17T23:57:28.875859379Z" level=warning msg="cleaning up after shim disconnected" id=986fd72874af0609b890adad463b8949807b6cdafa8b5ea09d1515758d56345c namespace=k8s.io Apr 17 23:57:28.875871 containerd[1468]: time="2026-04-17T23:57:28.875868139Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:57:28.900366 kubelet[2508]: E0417 23:57:28.900271 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:28.905342 containerd[1468]: time="2026-04-17T23:57:28.905289503Z" level=info msg="CreateContainer within sandbox \"33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 17 23:57:28.918206 containerd[1468]: time="2026-04-17T23:57:28.918157875Z" level=info msg="CreateContainer within sandbox \"33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c8e96ec2d89fe9c1fef562c93dc21fb1c68c05d700f4179ca5ab23c9d37ed845\"" Apr 17 23:57:28.918818 containerd[1468]: time="2026-04-17T23:57:28.918753358Z" level=info msg="StartContainer for \"c8e96ec2d89fe9c1fef562c93dc21fb1c68c05d700f4179ca5ab23c9d37ed845\"" Apr 17 23:57:28.944859 systemd[1]: Started 
cri-containerd-c8e96ec2d89fe9c1fef562c93dc21fb1c68c05d700f4179ca5ab23c9d37ed845.scope - libcontainer container c8e96ec2d89fe9c1fef562c93dc21fb1c68c05d700f4179ca5ab23c9d37ed845. Apr 17 23:57:28.966570 containerd[1468]: time="2026-04-17T23:57:28.966524704Z" level=info msg="StartContainer for \"c8e96ec2d89fe9c1fef562c93dc21fb1c68c05d700f4179ca5ab23c9d37ed845\" returns successfully" Apr 17 23:57:28.972291 systemd[1]: cri-containerd-c8e96ec2d89fe9c1fef562c93dc21fb1c68c05d700f4179ca5ab23c9d37ed845.scope: Deactivated successfully. Apr 17 23:57:28.991999 containerd[1468]: time="2026-04-17T23:57:28.991932783Z" level=info msg="shim disconnected" id=c8e96ec2d89fe9c1fef562c93dc21fb1c68c05d700f4179ca5ab23c9d37ed845 namespace=k8s.io Apr 17 23:57:28.992363 containerd[1468]: time="2026-04-17T23:57:28.992290413Z" level=warning msg="cleaning up after shim disconnected" id=c8e96ec2d89fe9c1fef562c93dc21fb1c68c05d700f4179ca5ab23c9d37ed845 namespace=k8s.io Apr 17 23:57:28.992363 containerd[1468]: time="2026-04-17T23:57:28.992338255Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:57:29.926316 kubelet[2508]: E0417 23:57:29.926044 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:29.954161 containerd[1468]: time="2026-04-17T23:57:29.954005589Z" level=info msg="CreateContainer within sandbox \"33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 17 23:57:29.974058 containerd[1468]: time="2026-04-17T23:57:29.973963602Z" level=info msg="CreateContainer within sandbox \"33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7350f938230c52e5662b153172e6d36cff9e9dab315a87de3a23ae3851808d40\"" Apr 17 23:57:29.974803 containerd[1468]: time="2026-04-17T23:57:29.974718326Z" 
level=info msg="StartContainer for \"7350f938230c52e5662b153172e6d36cff9e9dab315a87de3a23ae3851808d40\"" Apr 17 23:57:30.012368 systemd[1]: Started cri-containerd-7350f938230c52e5662b153172e6d36cff9e9dab315a87de3a23ae3851808d40.scope - libcontainer container 7350f938230c52e5662b153172e6d36cff9e9dab315a87de3a23ae3851808d40. Apr 17 23:57:30.036389 containerd[1468]: time="2026-04-17T23:57:30.036348110Z" level=info msg="StartContainer for \"7350f938230c52e5662b153172e6d36cff9e9dab315a87de3a23ae3851808d40\" returns successfully" Apr 17 23:57:30.036553 systemd[1]: cri-containerd-7350f938230c52e5662b153172e6d36cff9e9dab315a87de3a23ae3851808d40.scope: Deactivated successfully. Apr 17 23:57:30.062984 containerd[1468]: time="2026-04-17T23:57:30.062904027Z" level=info msg="shim disconnected" id=7350f938230c52e5662b153172e6d36cff9e9dab315a87de3a23ae3851808d40 namespace=k8s.io Apr 17 23:57:30.062984 containerd[1468]: time="2026-04-17T23:57:30.062950095Z" level=warning msg="cleaning up after shim disconnected" id=7350f938230c52e5662b153172e6d36cff9e9dab315a87de3a23ae3851808d40 namespace=k8s.io Apr 17 23:57:30.062984 containerd[1468]: time="2026-04-17T23:57:30.062956717Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:57:30.570824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7350f938230c52e5662b153172e6d36cff9e9dab315a87de3a23ae3851808d40-rootfs.mount: Deactivated successfully. 
Apr 17 23:57:30.687998 kubelet[2508]: E0417 23:57:30.687922 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:30.729210 kubelet[2508]: E0417 23:57:30.729168 2508 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 23:57:30.930112 kubelet[2508]: E0417 23:57:30.929849 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:30.933857 containerd[1468]: time="2026-04-17T23:57:30.933782164Z" level=info msg="CreateContainer within sandbox \"33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 17 23:57:30.946233 containerd[1468]: time="2026-04-17T23:57:30.946193729Z" level=info msg="CreateContainer within sandbox \"33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"136b1ec752fb6b9e11379e8aa62031c1c62f3a6a9617e327807842399d5b1668\"" Apr 17 23:57:30.946817 containerd[1468]: time="2026-04-17T23:57:30.946771071Z" level=info msg="StartContainer for \"136b1ec752fb6b9e11379e8aa62031c1c62f3a6a9617e327807842399d5b1668\"" Apr 17 23:57:30.972857 systemd[1]: Started cri-containerd-136b1ec752fb6b9e11379e8aa62031c1c62f3a6a9617e327807842399d5b1668.scope - libcontainer container 136b1ec752fb6b9e11379e8aa62031c1c62f3a6a9617e327807842399d5b1668. Apr 17 23:57:30.990882 systemd[1]: cri-containerd-136b1ec752fb6b9e11379e8aa62031c1c62f3a6a9617e327807842399d5b1668.scope: Deactivated successfully. 
Apr 17 23:57:30.994000 containerd[1468]: time="2026-04-17T23:57:30.993964819Z" level=info msg="StartContainer for \"136b1ec752fb6b9e11379e8aa62031c1c62f3a6a9617e327807842399d5b1668\" returns successfully" Apr 17 23:57:31.012499 containerd[1468]: time="2026-04-17T23:57:31.012429406Z" level=info msg="shim disconnected" id=136b1ec752fb6b9e11379e8aa62031c1c62f3a6a9617e327807842399d5b1668 namespace=k8s.io Apr 17 23:57:31.012499 containerd[1468]: time="2026-04-17T23:57:31.012487004Z" level=warning msg="cleaning up after shim disconnected" id=136b1ec752fb6b9e11379e8aa62031c1c62f3a6a9617e327807842399d5b1668 namespace=k8s.io Apr 17 23:57:31.012499 containerd[1468]: time="2026-04-17T23:57:31.012495308Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:57:31.571167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-136b1ec752fb6b9e11379e8aa62031c1c62f3a6a9617e327807842399d5b1668-rootfs.mount: Deactivated successfully. Apr 17 23:57:31.936274 kubelet[2508]: E0417 23:57:31.935928 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:31.940252 containerd[1468]: time="2026-04-17T23:57:31.940175879Z" level=info msg="CreateContainer within sandbox \"33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 17 23:57:31.958832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038903166.mount: Deactivated successfully. 
Apr 17 23:57:31.960988 containerd[1468]: time="2026-04-17T23:57:31.960936753Z" level=info msg="CreateContainer within sandbox \"33499a76e4e72169177eff6ff5ac6530730f4989e0f8f86cfe94fc65491a4bc8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4dbfc6a0647b9c49ba4ddbd140ba5d2acac495eb5703dccec6dffc3cf019b555\"" Apr 17 23:57:31.962286 containerd[1468]: time="2026-04-17T23:57:31.961552041Z" level=info msg="StartContainer for \"4dbfc6a0647b9c49ba4ddbd140ba5d2acac495eb5703dccec6dffc3cf019b555\"" Apr 17 23:57:31.981919 systemd[1]: Started cri-containerd-4dbfc6a0647b9c49ba4ddbd140ba5d2acac495eb5703dccec6dffc3cf019b555.scope - libcontainer container 4dbfc6a0647b9c49ba4ddbd140ba5d2acac495eb5703dccec6dffc3cf019b555. Apr 17 23:57:32.004810 containerd[1468]: time="2026-04-17T23:57:32.004732412Z" level=info msg="StartContainer for \"4dbfc6a0647b9c49ba4ddbd140ba5d2acac495eb5703dccec6dffc3cf019b555\" returns successfully" Apr 17 23:57:32.113987 kubelet[2508]: I0417 23:57:32.113512 2508 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-17T23:57:32Z","lastTransitionTime":"2026-04-17T23:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 17 23:57:32.222693 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 17 23:57:32.943075 kubelet[2508]: E0417 23:57:32.943026 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:32.958560 kubelet[2508]: I0417 23:57:32.958490 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mnrmf" podStartSLOduration=4.9584658059999995 podStartE2EDuration="4.958465806s" podCreationTimestamp="2026-04-17 
23:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:57:32.958401173 +0000 UTC m=+72.366697533" watchObservedRunningTime="2026-04-17 23:57:32.958465806 +0000 UTC m=+72.366762160" Apr 17 23:57:34.690562 kubelet[2508]: E0417 23:57:34.690517 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:34.847158 systemd[1]: run-containerd-runc-k8s.io-4dbfc6a0647b9c49ba4ddbd140ba5d2acac495eb5703dccec6dffc3cf019b555-runc.wbeIEK.mount: Deactivated successfully. Apr 17 23:57:35.011375 systemd-networkd[1391]: lxc_health: Link UP Apr 17 23:57:35.021017 systemd-networkd[1391]: lxc_health: Gained carrier Apr 17 23:57:36.254080 systemd-networkd[1391]: lxc_health: Gained IPv6LL Apr 17 23:57:36.690532 kubelet[2508]: E0417 23:57:36.690325 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:36.952040 kubelet[2508]: E0417 23:57:36.951475 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:36.955015 systemd[1]: run-containerd-runc-k8s.io-4dbfc6a0647b9c49ba4ddbd140ba5d2acac495eb5703dccec6dffc3cf019b555-runc.BmDgcm.mount: Deactivated successfully. Apr 17 23:57:37.954090 kubelet[2508]: E0417 23:57:37.953940 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:57:41.206144 sshd[4314]: pam_unix(sshd:session): session closed for user core Apr 17 23:57:41.209416 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:50338.service: Deactivated successfully. 
Apr 17 23:57:41.211312 systemd[1]: session-25.scope: Deactivated successfully. Apr 17 23:57:41.212427 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. Apr 17 23:57:41.213994 systemd-logind[1449]: Removed session 25. Apr 17 23:57:41.687917 kubelet[2508]: E0417 23:57:41.687836 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"