Apr 16 02:06:34.205978 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:45:03 -00 2026
Apr 16 02:06:34.206002 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 02:06:34.206012 kernel: BIOS-provided physical RAM map:
Apr 16 02:06:34.206017 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 16 02:06:34.206022 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 16 02:06:34.206027 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 16 02:06:34.206033 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 16 02:06:34.206038 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 16 02:06:34.206043 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 16 02:06:34.206049 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 16 02:06:34.206056 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 16 02:06:34.206061 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 16 02:06:34.206066 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 16 02:06:34.206071 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 16 02:06:34.206077 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 16 02:06:34.206083 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 16 02:06:34.206090 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 16 02:06:34.206095 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 16 02:06:34.206101 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 16 02:06:34.206106 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 16 02:06:34.206111 kernel: NX (Execute Disable) protection: active
Apr 16 02:06:34.206117 kernel: APIC: Static calls initialized
Apr 16 02:06:34.206122 kernel: efi: EFI v2.7 by EDK II
Apr 16 02:06:34.206128 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Apr 16 02:06:34.206133 kernel: SMBIOS 2.8 present.
Apr 16 02:06:34.206139 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 16 02:06:34.206144 kernel: Hypervisor detected: KVM
Apr 16 02:06:34.206151 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 02:06:34.207872 kernel: kvm-clock: using sched offset of 10317450942 cycles
Apr 16 02:06:34.207908 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 02:06:34.207913 kernel: tsc: Detected 2793.438 MHz processor
Apr 16 02:06:34.207918 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 02:06:34.207924 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 02:06:34.207929 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 16 02:06:34.207934 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 16 02:06:34.207939 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 02:06:34.208012 kernel: Using GB pages for direct mapping
Apr 16 02:06:34.208017 kernel: Secure boot disabled
Apr 16 02:06:34.208021 kernel: ACPI: Early table checksum verification disabled
Apr 16 02:06:34.208059 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 16 02:06:34.208067 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 16 02:06:34.208072 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:06:34.208077 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:06:34.208084 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 16 02:06:34.208089 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:06:34.208094 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:06:34.208099 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:06:34.208104 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:06:34.208109 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 16 02:06:34.208114 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 16 02:06:34.208121 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 16 02:06:34.208126 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 16 02:06:34.208131 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 16 02:06:34.208136 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 16 02:06:34.208141 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 16 02:06:34.208146 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 16 02:06:34.208151 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 16 02:06:34.210061 kernel: No NUMA configuration found
Apr 16 02:06:34.210138 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 16 02:06:34.210459 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 16 02:06:34.210467 kernel: Zone ranges:
Apr 16 02:06:34.210473 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 02:06:34.210479 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 16 02:06:34.210486 kernel:   Normal   empty
Apr 16 02:06:34.210492 kernel: Movable zone start for each node
Apr 16 02:06:34.210498 kernel: Early memory node ranges
Apr 16 02:06:34.210504 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 16 02:06:34.210510 kernel:   node   0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 16 02:06:34.210516 kernel:   node   0: [mem 0x0000000000808000-0x000000000080afff]
Apr 16 02:06:34.210905 kernel:   node   0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 16 02:06:34.210910 kernel:   node   0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 16 02:06:34.210915 kernel:   node   0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 16 02:06:34.210920 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 16 02:06:34.210925 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 02:06:34.210930 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 16 02:06:34.210935 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 16 02:06:34.210941 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 02:06:34.210946 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 16 02:06:34.210971 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 16 02:06:34.210998 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 16 02:06:34.211003 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 02:06:34.211008 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 02:06:34.211014 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 02:06:34.211019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 02:06:34.211024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 02:06:34.211029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 02:06:34.211034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 02:06:34.211041 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 02:06:34.211046 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 02:06:34.211051 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 02:06:34.211056 kernel: TSC deadline timer available
Apr 16 02:06:34.211062 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 16 02:06:34.211067 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 02:06:34.211072 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 16 02:06:34.211077 kernel: kvm-guest: setup PV sched yield
Apr 16 02:06:34.211082 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 16 02:06:34.211089 kernel: Booting paravirtualized kernel on KVM
Apr 16 02:06:34.211094 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 02:06:34.211099 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 16 02:06:34.211104 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 16 02:06:34.211109 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 16 02:06:34.211115 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 16 02:06:34.211120 kernel: kvm-guest: PV spinlocks enabled
Apr 16 02:06:34.211125 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 02:06:34.211131 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 02:06:34.211139 kernel: random: crng init done
Apr 16 02:06:34.211144 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 02:06:34.211149 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 02:06:34.212099 kernel: Fallback order for Node 0: 0
Apr 16 02:06:34.212727 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 16 02:06:34.212735 kernel: Policy zone: DMA32
Apr 16 02:06:34.212741 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 02:06:34.212747 kernel: Memory: 2399660K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 167136K reserved, 0K cma-reserved)
Apr 16 02:06:34.212789 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 02:06:34.212795 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 16 02:06:34.212802 kernel: ftrace: allocated 149 pages with 4 groups
Apr 16 02:06:34.212808 kernel: Dynamic Preempt: voluntary
Apr 16 02:06:34.212814 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 02:06:34.212827 kernel: rcu: RCU event tracing is enabled.
Apr 16 02:06:34.212834 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 02:06:34.212840 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 02:06:34.212846 kernel: Rude variant of Tasks RCU enabled.
Apr 16 02:06:34.212851 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 02:06:34.212856 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 02:06:34.212862 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 02:06:34.212870 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 16 02:06:34.212875 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 02:06:34.212881 kernel: Console: colour dummy device 80x25
Apr 16 02:06:34.212886 kernel: printk: console [ttyS0] enabled
Apr 16 02:06:34.212892 kernel: ACPI: Core revision 20230628
Apr 16 02:06:34.212899 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 02:06:34.212905 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 02:06:34.212910 kernel: x2apic enabled
Apr 16 02:06:34.212916 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 02:06:34.212921 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 16 02:06:34.212927 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 16 02:06:34.212933 kernel: kvm-guest: setup PV IPIs
Apr 16 02:06:34.212938 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 02:06:34.212944 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 02:06:34.212951 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 16 02:06:34.212957 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 02:06:34.212962 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 02:06:34.212968 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 02:06:34.212974 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 02:06:34.212979 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 02:06:34.212985 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 02:06:34.212990 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 16 02:06:34.212996 kernel: RETBleed: Vulnerable
Apr 16 02:06:34.213003 kernel: Speculative Store Bypass: Vulnerable
Apr 16 02:06:34.213009 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 02:06:34.213014 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 16 02:06:34.213020 kernel: active return thunk: its_return_thunk
Apr 16 02:06:34.213025 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 02:06:34.213031 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 02:06:34.213036 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 02:06:34.213042 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 02:06:34.213047 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 02:06:34.213055 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 02:06:34.213060 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 02:06:34.213066 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 02:06:34.213071 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 02:06:34.213077 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 02:06:34.213082 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 02:06:34.213087 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 16 02:06:34.213093 kernel: Freeing SMP alternatives memory: 32K
Apr 16 02:06:34.213098 kernel: pid_max: default: 32768 minimum: 301
Apr 16 02:06:34.213106 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 16 02:06:34.213111 kernel: landlock: Up and running.
Apr 16 02:06:34.213116 kernel: SELinux: Initializing.
Apr 16 02:06:34.213122 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 02:06:34.213127 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 02:06:34.213133 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 16 02:06:34.213139 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 02:06:34.213145 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 02:06:34.213152 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 02:06:34.228704 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 16 02:06:34.229111 kernel: signal: max sigframe size: 3632
Apr 16 02:06:34.229122 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 02:06:34.229134 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 02:06:34.229150 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 02:06:34.229591 kernel: smp: Bringing up secondary CPUs ...
Apr 16 02:06:34.229603 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 02:06:34.229616 kernel: .... node #0, CPUs: #1 #2 #3
Apr 16 02:06:34.229625 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 02:06:34.229789 kernel: smpboot: Max logical packages: 1
Apr 16 02:06:34.229797 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 16 02:06:34.229805 kernel: devtmpfs: initialized
Apr 16 02:06:34.229812 kernel: x86/mm: Memory block size: 128MB
Apr 16 02:06:34.229820 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 16 02:06:34.229828 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 16 02:06:34.229840 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 16 02:06:34.229849 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 16 02:06:34.229981 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 16 02:06:34.229991 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 02:06:34.230002 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 02:06:34.230012 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 02:06:34.230024 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 02:06:34.230033 kernel: audit: initializing netlink subsys (disabled)
Apr 16 02:06:34.230044 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 02:06:34.230054 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 02:06:34.230063 kernel: audit: type=2000 audit(1776305186.295:1): state=initialized audit_enabled=0 res=1
Apr 16 02:06:34.230073 kernel: cpuidle: using governor menu
Apr 16 02:06:34.230084 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 02:06:34.230092 kernel: dca service started, version 1.12.1
Apr 16 02:06:34.230100 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 16 02:06:34.230108 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 16 02:06:34.230118 kernel: PCI: Using configuration type 1 for base access
Apr 16 02:06:34.230128 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 02:06:34.230136 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 02:06:34.230151 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 02:06:34.230903 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 02:06:34.230919 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 02:06:34.230931 kernel: ACPI: Added _OSI(Module Device)
Apr 16 02:06:34.230941 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 02:06:34.230952 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 02:06:34.230962 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 02:06:34.230971 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 16 02:06:34.230981 kernel: ACPI: Interpreter enabled
Apr 16 02:06:34.230990 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 02:06:34.231001 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 02:06:34.231015 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 02:06:34.231030 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 02:06:34.231042 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 02:06:34.231051 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 02:06:34.233140 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 02:06:34.246990 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 02:06:34.247151 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 02:06:34.247906 kernel: PCI host bridge to bus 0000:00
Apr 16 02:06:34.251079 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 02:06:34.251661 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 02:06:34.251751 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 02:06:34.251824 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 16 02:06:34.251902 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 16 02:06:34.251972 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 16 02:06:34.252048 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 02:06:34.252146 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 16 02:06:34.252760 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 16 02:06:34.252841 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 16 02:06:34.252921 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 16 02:06:34.253003 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 16 02:06:34.253127 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 16 02:06:34.253711 kernel: pci 0000:00:01.0: efifb_fixup_resources+0x0/0x140 took 10742 usecs
Apr 16 02:06:34.253772 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 02:06:34.253828 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 15625 usecs
Apr 16 02:06:34.253891 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 16 02:06:34.253950 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 16 02:06:34.254005 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 16 02:06:34.254077 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 16 02:06:34.254594 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 16 02:06:34.254671 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 16 02:06:34.254727 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 16 02:06:34.254784 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 16 02:06:34.254844 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 16 02:06:34.254900 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 16 02:06:34.254965 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 16 02:06:34.255021 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 16 02:06:34.255076 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 16 02:06:34.255136 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 16 02:06:34.255669 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 02:06:34.255763 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 15625 usecs
Apr 16 02:06:34.255827 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 16 02:06:34.255895 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 16 02:06:34.255953 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 16 02:06:34.256014 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 16 02:06:34.256070 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 16 02:06:34.256082 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 02:06:34.256091 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 02:06:34.256099 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 02:06:34.256110 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 02:06:34.256118 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 02:06:34.256126 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 02:06:34.256134 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 02:06:34.256142 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 02:06:34.256150 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 02:06:34.256742 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 02:06:34.256755 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 02:06:34.256766 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 02:06:34.256784 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 02:06:34.256794 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 02:06:34.256806 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 02:06:34.256816 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 02:06:34.256826 kernel: iommu: Default domain type: Translated
Apr 16 02:06:34.256835 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 02:06:34.256845 kernel: efivars: Registered efivars operations
Apr 16 02:06:34.256854 kernel: PCI: Using ACPI for IRQ routing
Apr 16 02:06:34.256864 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 02:06:34.256875 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 16 02:06:34.256889 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 16 02:06:34.256896 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 16 02:06:34.256902 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 16 02:06:34.256985 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 02:06:34.257045 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 02:06:34.257101 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 02:06:34.257112 kernel: vgaarb: loaded
Apr 16 02:06:34.257125 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 02:06:34.257136 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 02:06:34.257146 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 02:06:34.257154 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 02:06:34.257550 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 02:06:34.257557 kernel: pnp: PnP ACPI init
Apr 16 02:06:34.257636 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 16 02:06:34.257645 kernel: pnp: PnP ACPI: found 6 devices
Apr 16 02:06:34.257652 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 02:06:34.257657 kernel: NET: Registered PF_INET protocol family
Apr 16 02:06:34.257666 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 02:06:34.257672 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 02:06:34.257678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 02:06:34.257684 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 02:06:34.257690 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 02:06:34.257695 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 02:06:34.257701 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 02:06:34.257707 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 02:06:34.257714 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 02:06:34.257720 kernel: NET: Registered PF_XDP protocol family
Apr 16 02:06:34.257778 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 16 02:06:34.257835 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 16 02:06:34.257888 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 02:06:34.257940 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 02:06:34.257989 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 02:06:34.258039 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 16 02:06:34.258090 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 16 02:06:34.258140 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 16 02:06:34.258147 kernel: PCI: CLS 0 bytes, default 64
Apr 16 02:06:34.258153 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 02:06:34.258511 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 02:06:34.258517 kernel: Initialise system trusted keyrings
Apr 16 02:06:34.258522 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 02:06:34.258528 kernel: Key type asymmetric registered
Apr 16 02:06:34.258534 kernel: Asymmetric key parser 'x509' registered
Apr 16 02:06:34.258542 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 16 02:06:34.258547 kernel: io scheduler mq-deadline registered
Apr 16 02:06:34.258553 kernel: io scheduler kyber registered
Apr 16 02:06:34.258558 kernel: io scheduler bfq registered
Apr 16 02:06:34.258564 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 02:06:34.258570 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 02:06:34.258576 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 02:06:34.258582 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 16 02:06:34.258588 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 02:06:34.258595 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 02:06:34.258601 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 02:06:34.258606 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 02:06:34.258612 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 02:06:34.258618 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 02:06:34.258684 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 16 02:06:34.258737 kernel: rtc_cmos 00:04: registered as rtc0
Apr 16 02:06:34.258791 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T02:06:32 UTC (1776305192)
Apr 16 02:06:34.258845 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 16 02:06:34.258851 kernel: intel_pstate: CPU model not supported
Apr 16 02:06:34.258857 kernel: efifb: probing for efifb
Apr 16 02:06:34.258862 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 16 02:06:34.258868 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 16 02:06:34.258874 kernel: efifb: scrolling: redraw
Apr 16 02:06:34.258891 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 16 02:06:34.258898 kernel: Console: switching to colour frame buffer device 100x37
Apr 16 02:06:34.258904 kernel: fb0: EFI VGA frame buffer device
Apr 16 02:06:34.258911 kernel: pstore: Using crash dump compression: deflate
Apr 16 02:06:34.258917 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 16 02:06:34.258923 kernel: NET: Registered PF_INET6 protocol family
Apr 16 02:06:34.258928 kernel: Segment Routing with IPv6
Apr 16 02:06:34.258934 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 02:06:34.258940 kernel: NET: Registered PF_PACKET protocol family
Apr 16 02:06:34.258945 kernel: Key type dns_resolver registered
Apr 16 02:06:34.258951 kernel: IPI shorthand broadcast: enabled
Apr 16 02:06:34.258957 kernel: sched_clock: Marking stable (4613251242, 1788414551)->(7403476890, -1001811097)
Apr 16 02:06:34.258964 kernel: registered taskstats version 1
Apr 16 02:06:34.258969 kernel: Loading compiled-in X.509 certificates
Apr 16 02:06:34.258975 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6e6d886174c86dc730e1b14e46a1dab518d9b090'
Apr 16 02:06:34.258981 kernel: Key type .fscrypt registered
Apr 16 02:06:34.258986 kernel: Key type fscrypt-provisioning registered
Apr 16 02:06:34.258992 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 02:06:34.258998 kernel: ima: Allocated hash algorithm: sha1
Apr 16 02:06:34.259003 kernel: ima: No architecture policies found
Apr 16 02:06:34.259009 kernel: clk: Disabling unused clocks
Apr 16 02:06:34.259016 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 16 02:06:34.259022 kernel: Write protecting the kernel read-only data: 36864k
Apr 16 02:06:34.259027 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 16 02:06:34.259034 kernel: Run /init as init process
Apr 16 02:06:34.259040 kernel: with arguments:
Apr 16 02:06:34.259046 kernel: /init
Apr 16 02:06:34.259052 kernel: with environment:
Apr 16 02:06:34.259058 kernel: HOME=/
Apr 16 02:06:34.259063 kernel: TERM=linux
Apr 16 02:06:34.259072 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 02:06:34.259080 systemd[1]: Detected virtualization kvm.
Apr 16 02:06:34.259086 systemd[1]: Detected architecture x86-64.
Apr 16 02:06:34.259092 systemd[1]: Running in initrd.
Apr 16 02:06:34.259100 systemd[1]: No hostname configured, using default hostname.
Apr 16 02:06:34.259106 systemd[1]: Hostname set to .
Apr 16 02:06:34.259112 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 02:06:34.259119 systemd[1]: Queued start job for default target initrd.target.
Apr 16 02:06:34.259125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 02:06:34.259131 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 02:06:34.259137 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 02:06:34.259144 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 02:06:34.259151 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 02:06:34.259494 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 02:06:34.259503 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 02:06:34.259509 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 02:06:34.259515 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 02:06:34.259521 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 02:06:34.259527 systemd[1]: Reached target paths.target - Path Units.
Apr 16 02:06:34.259535 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 02:06:34.259541 systemd[1]: Reached target swap.target - Swaps.
Apr 16 02:06:34.259547 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 02:06:34.259553 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 02:06:34.259559 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 02:06:34.259566 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 02:06:34.259572 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 16 02:06:34.259578 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 02:06:34.259586 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 02:06:34.259592 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 02:06:34.259598 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 02:06:34.259604 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 02:06:34.259610 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 02:06:34.259616 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 02:06:34.259623 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 02:06:34.259629 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 02:06:34.259635 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 02:06:34.259658 systemd-journald[193]: Collecting audit messages is disabled.
Apr 16 02:06:34.259674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 02:06:34.259681 systemd-journald[193]: Journal started
Apr 16 02:06:34.259697 systemd-journald[193]: Runtime Journal (/run/log/journal/f18f3972497247e89aff4f63a7663171) is 6.0M, max 48.3M, 42.2M free.
Apr 16 02:06:34.284552 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 02:06:34.294122 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 02:06:34.305693 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 02:06:34.328840 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 02:06:34.354492 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 02:06:34.379771 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 02:06:34.398659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 02:06:34.423605 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 02:06:34.473061 systemd-modules-load[194]: Inserted module 'overlay'
Apr 16 02:06:34.490704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 02:06:34.512474 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 02:06:34.592904 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 02:06:34.595964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 02:06:34.613801 kernel: Bridge firewalling registered
Apr 16 02:06:34.601936 systemd-modules-load[194]: Inserted module 'br_netfilter'
Apr 16 02:06:34.638104 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 02:06:34.663698 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 02:06:34.678561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 02:06:34.707506 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 02:06:34.735981 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 02:06:34.757533 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 02:06:34.783559 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 02:06:34.817755 dracut-cmdline[225]: dracut-dracut-053
Apr 16 02:06:34.830581 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 02:06:34.837991 systemd-resolved[231]: Positive Trust Anchors:
Apr 16 02:06:34.837999 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 02:06:34.838024 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 02:06:34.841015 systemd-resolved[231]: Defaulting to hostname 'linux'.
Apr 16 02:06:34.842420 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 02:06:34.890693 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 02:06:35.202697 kernel: SCSI subsystem initialized
Apr 16 02:06:35.225626 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 02:06:35.258536 kernel: iscsi: registered transport (tcp)
Apr 16 02:06:35.309882 kernel: iscsi: registered transport (qla4xxx)
Apr 16 02:06:35.309963 kernel: QLogic iSCSI HBA Driver
Apr 16 02:06:35.396930 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 02:06:35.425795 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 02:06:35.504508 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 02:06:35.504596 kernel: device-mapper: uevent: version 1.0.3
Apr 16 02:06:35.514865 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 16 02:06:35.616913 kernel: raid6: avx512x4 gen() 14641 MB/s
Apr 16 02:06:35.644785 kernel: raid6: avx512x2 gen() 19340 MB/s
Apr 16 02:06:35.668568 kernel: raid6: avx512x1 gen() 12812 MB/s
Apr 16 02:06:35.691720 kernel: raid6: avx2x4 gen() 24073 MB/s
Apr 16 02:06:35.714576 kernel: raid6: avx2x2 gen() 24004 MB/s
Apr 16 02:06:35.744476 kernel: raid6: avx2x1 gen() 18499 MB/s
Apr 16 02:06:35.744566 kernel: raid6: using algorithm avx2x4 gen() 24073 MB/s
Apr 16 02:06:35.774763 kernel: raid6: .... xor() 6470 MB/s, rmw enabled
Apr 16 02:06:35.774849 kernel: raid6: using avx512x2 recovery algorithm
Apr 16 02:06:35.818921 kernel: xor: automatically using best checksumming function avx
Apr 16 02:06:36.276699 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 02:06:36.308118 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 02:06:36.345032 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 02:06:36.364072 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Apr 16 02:06:36.368122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 02:06:36.416664 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 02:06:36.465873 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Apr 16 02:06:36.545037 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 02:06:36.584837 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 02:06:36.657672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 02:06:36.687052 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 02:06:36.734704 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 02:06:36.743900 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 02:06:36.767516 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 02:06:36.776670 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 02:06:36.849659 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 02:06:36.869831 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 02:06:36.869940 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 02:06:36.899885 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 02:06:36.916748 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 02:06:36.917124 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 02:06:36.989058 kernel: cryptd: max_cpu_qlen set to 1000
Apr 16 02:06:36.989088 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 16 02:06:36.963451 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 02:06:37.013778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 02:06:37.065826 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 16 02:06:37.015086 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 02:06:37.066716 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 02:06:37.119004 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 02:06:37.119035 kernel: GPT:9289727 != 19775487
Apr 16 02:06:37.119046 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 02:06:37.119057 kernel: GPT:9289727 != 19775487
Apr 16 02:06:37.066806 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 02:06:37.164842 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 02:06:37.164867 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 02:06:37.181898 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 02:06:37.316879 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 02:06:37.383799 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 16 02:06:37.383883 kernel: libata version 3.00 loaded.
Apr 16 02:06:37.385049 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 02:06:37.441586 kernel: AES CTR mode by8 optimization enabled
Apr 16 02:06:37.441607 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468)
Apr 16 02:06:37.414970 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 16 02:06:37.495654 kernel: BTRFS: device fsid 936fcbd8-a8ab-4e87-b115-d77c7a08e984 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (464)
Apr 16 02:06:37.495691 kernel: ahci 0000:00:1f.2: version 3.0
Apr 16 02:06:37.495849 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 16 02:06:37.495863 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 16 02:06:37.513664 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 16 02:06:37.515534 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 02:06:37.552710 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 16 02:06:37.647661 kernel: scsi host0: ahci
Apr 16 02:06:37.647870 kernel: scsi host1: ahci
Apr 16 02:06:37.647972 kernel: scsi host2: ahci
Apr 16 02:06:37.648053 kernel: scsi host3: ahci
Apr 16 02:06:37.648136 kernel: scsi host4: ahci
Apr 16 02:06:37.648643 kernel: scsi host5: ahci
Apr 16 02:06:37.648732 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 16 02:06:37.648743 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 16 02:06:37.648753 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 16 02:06:37.648765 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 16 02:06:37.578997 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 16 02:06:37.725981 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 16 02:06:37.726016 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 16 02:06:37.695586 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 16 02:06:37.758723 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 02:06:37.798810 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 02:06:37.836847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 02:06:37.836982 disk-uuid[567]: Primary Header is updated.
Apr 16 02:06:37.836982 disk-uuid[567]: Secondary Entries is updated.
Apr 16 02:06:37.836982 disk-uuid[567]: Secondary Header is updated.
Apr 16 02:06:37.870795 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 02:06:37.889778 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 02:06:38.008807 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 16 02:06:38.008863 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 16 02:06:38.028826 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 16 02:06:38.042834 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 16 02:06:38.057585 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 16 02:06:38.068601 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 16 02:06:38.083810 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 16 02:06:38.083856 kernel: ata3.00: applying bridge limits
Apr 16 02:06:38.095598 kernel: ata3.00: configured for UDMA/100
Apr 16 02:06:38.111551 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 02:06:38.196844 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 16 02:06:38.197097 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 02:06:38.219073 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 16 02:06:38.895647 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 02:06:38.898678 disk-uuid[569]: The operation has completed successfully.
Apr 16 02:06:38.975808 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 02:06:38.976554 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 02:06:39.027915 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 02:06:39.056038 sh[594]: Success
Apr 16 02:06:39.108702 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 16 02:06:39.226912 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 02:06:39.267438 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 02:06:39.277686 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 02:06:39.380605 kernel: BTRFS info (device dm-0): first mount of filesystem 936fcbd8-a8ab-4e87-b115-d77c7a08e984
Apr 16 02:06:39.380928 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 16 02:06:39.403885 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 16 02:06:39.403953 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 16 02:06:39.421091 kernel: BTRFS info (device dm-0): using free space tree
Apr 16 02:06:39.467827 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 02:06:39.477655 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 02:06:39.522784 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 02:06:39.534636 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 02:06:39.593939 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 02:06:39.594020 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 02:06:39.594033 kernel: BTRFS info (device vda6): using free space tree
Apr 16 02:06:39.625836 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 02:06:39.662936 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 16 02:06:39.690045 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 02:06:39.763519 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 02:06:39.799578 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 02:06:39.969612 ignition[705]: Ignition 2.19.0
Apr 16 02:06:39.970014 ignition[705]: Stage: fetch-offline
Apr 16 02:06:39.970048 ignition[705]: no configs at "/usr/lib/ignition/base.d"
Apr 16 02:06:39.970055 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 02:06:39.970121 ignition[705]: parsed url from cmdline: ""
Apr 16 02:06:39.970124 ignition[705]: no config URL provided
Apr 16 02:06:39.970127 ignition[705]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 02:06:39.970132 ignition[705]: no config at "/usr/lib/ignition/user.ign"
Apr 16 02:06:39.970153 ignition[705]: op(1): [started] loading QEMU firmware config module
Apr 16 02:06:39.970585 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 16 02:06:39.996739 ignition[705]: op(1): [finished] loading QEMU firmware config module
Apr 16 02:06:40.085792 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 02:06:40.121837 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 02:06:40.168698 systemd-networkd[783]: lo: Link UP
Apr 16 02:06:40.168881 systemd-networkd[783]: lo: Gained carrier
Apr 16 02:06:40.171766 systemd-networkd[783]: Enumeration completed
Apr 16 02:06:40.175668 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 02:06:40.181540 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 02:06:40.181545 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 02:06:40.184567 systemd-networkd[783]: eth0: Link UP
Apr 16 02:06:40.184571 systemd-networkd[783]: eth0: Gained carrier
Apr 16 02:06:40.184583 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 02:06:40.194133 systemd[1]: Reached target network.target - Network.
Apr 16 02:06:40.263715 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 02:06:41.454605 systemd-networkd[783]: eth0: Gained IPv6LL
Apr 16 02:06:41.539018 ignition[705]: parsing config with SHA512: ee8c64d2305a94b38a323fdd41031af15ffa9aa87fa901aae561622772bf5405c7402cd488bf36fd571a5026137fa306bd567019ce38a3c20b5e5c0adb32678e
Apr 16 02:06:41.555639 unknown[705]: fetched base config from "system"
Apr 16 02:06:41.557942 unknown[705]: fetched user config from "qemu"
Apr 16 02:06:41.559723 ignition[705]: fetch-offline: fetch-offline passed
Apr 16 02:06:41.561994 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 02:06:41.559793 ignition[705]: Ignition finished successfully
Apr 16 02:06:41.575636 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 16 02:06:41.608017 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 02:06:41.734734 ignition[787]: Ignition 2.19.0
Apr 16 02:06:41.734741 ignition[787]: Stage: kargs
Apr 16 02:06:41.744933 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 02:06:41.735147 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 16 02:06:41.735539 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 02:06:41.740852 ignition[787]: kargs: kargs passed
Apr 16 02:06:41.740891 ignition[787]: Ignition finished successfully
Apr 16 02:06:41.812883 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 02:06:41.881740 ignition[795]: Ignition 2.19.0
Apr 16 02:06:41.881837 ignition[795]: Stage: disks
Apr 16 02:06:41.882508 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Apr 16 02:06:41.882515 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 02:06:41.886709 ignition[795]: disks: disks passed
Apr 16 02:06:41.908120 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 02:06:41.886758 ignition[795]: Ignition finished successfully
Apr 16 02:06:41.932892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 02:06:41.941910 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 02:06:41.990758 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 02:06:42.030951 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 02:06:42.047598 systemd[1]: Reached target basic.target - Basic System.
Apr 16 02:06:42.091768 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 02:06:42.148917 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 16 02:06:42.155799 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 02:06:42.244660 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 02:06:42.653897 kernel: EXT4-fs (vda9): mounted filesystem 9ac74074-8829-477f-a4c4-5563740ec49b r/w with ordered data mode. Quota mode: none.
Apr 16 02:06:42.655860 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 02:06:42.658800 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 02:06:42.716898 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 02:06:42.755139 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Apr 16 02:06:42.746587 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 02:06:42.804704 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 02:06:42.804741 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 02:06:42.804750 kernel: BTRFS info (device vda6): using free space tree
Apr 16 02:06:42.778070 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 02:06:42.778138 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 02:06:42.778537 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 02:06:42.807987 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 02:06:42.851983 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 02:06:42.937551 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 02:06:42.941667 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 02:06:42.983835 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 02:06:43.011797 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Apr 16 02:06:43.026784 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 02:06:43.041559 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 02:06:43.445841 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 02:06:43.484829 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 02:06:43.499799 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 02:06:43.542094 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 02:06:43.518048 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 02:06:43.593756 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 02:06:43.673819 ignition[927]: INFO : Ignition 2.19.0
Apr 16 02:06:43.673819 ignition[927]: INFO : Stage: mount
Apr 16 02:06:43.688886 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 02:06:43.688886 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 02:06:43.688886 ignition[927]: INFO : mount: mount passed
Apr 16 02:06:43.688886 ignition[927]: INFO : Ignition finished successfully
Apr 16 02:06:43.736901 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 02:06:43.760125 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 02:06:43.774600 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 02:06:43.815749 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Apr 16 02:06:43.815797 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 02:06:43.835931 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 02:06:43.835997 kernel: BTRFS info (device vda6): using free space tree
Apr 16 02:06:43.868770 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 02:06:43.874104 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 02:06:43.960410 ignition[956]: INFO : Ignition 2.19.0
Apr 16 02:06:43.960410 ignition[956]: INFO : Stage: files
Apr 16 02:06:43.979733 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 02:06:43.979733 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 02:06:43.979733 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 02:06:43.979733 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 02:06:43.979733 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 02:06:43.979733 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 02:06:43.979733 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 02:06:43.979733 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 02:06:43.977715 unknown[956]: wrote ssh authorized keys file for user: core
Apr 16 02:06:44.104873 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 02:06:44.104873 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 16 02:06:44.148817 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 02:06:44.243135 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 02:06:44.266838 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 16 02:06:44.266838 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 16 02:06:44.557933 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 16 02:06:44.691777 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 16 02:06:44.691777 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 16 02:06:44.743075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 16 02:06:45.114950 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 16 02:06:45.247977 kernel: hrtimer: interrupt took 7946130 ns
Apr 16 02:06:45.424445 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 16 02:06:45.450469 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 16 02:06:45.450469 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 02:06:45.450469 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 02:06:45.450469 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 16 02:06:45.450469 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 16 02:06:45.450469 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 02:06:45.450469 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 02:06:45.450469 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 16 02:06:45.450469 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 16 02:06:45.620813 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 02:06:45.620813 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 02:06:45.620813 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 16 02:06:45.620813 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 02:06:45.620813 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 02:06:45.620813 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 02:06:45.620813 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 02:06:45.620813 ignition[956]: INFO : files: files passed
Apr 16 02:06:45.620813 ignition[956]: INFO : Ignition finished successfully
Apr 16 02:06:45.539009 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 02:06:45.705036 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 02:06:45.725842 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 02:06:45.781486 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 02:06:45.840532 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 16 02:06:45.781675 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 02:06:45.868749 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 02:06:45.868749 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 02:06:45.905759 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 02:06:45.906880 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 02:06:45.954796 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 02:06:45.990059 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 02:06:46.064131 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 02:06:46.064738 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 02:06:46.083601 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 02:06:46.103879 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 02:06:46.129707 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 02:06:46.171599 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 02:06:46.224575 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 02:06:46.266768 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 02:06:46.304894 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 02:06:46.313123 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 02:06:46.342652 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 02:06:46.362004 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 02:06:46.362739 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 02:06:46.389621 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 02:06:46.404605 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 02:06:46.423734 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 02:06:46.451074 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 02:06:46.469651 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 02:06:46.493093 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 02:06:46.511110 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 02:06:46.541647 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 02:06:46.560017 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 02:06:46.585929 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 02:06:46.613820 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 02:06:46.613936 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 02:06:46.636735 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 02:06:46.646050 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 02:06:46.676420 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 02:06:46.682623 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 02:06:46.699723 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 02:06:46.700034 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 02:06:46.735741 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 02:06:46.736036 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 02:06:46.747905 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 02:06:46.770154 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 02:06:46.774006 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 02:06:46.793734 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 02:06:46.819815 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 02:06:46.841821 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 02:06:46.842491 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 02:06:46.867626 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 02:06:46.867953 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 02:06:46.880813 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 02:06:46.881006 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 02:06:46.908731 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 02:06:46.909044 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 02:06:47.056969 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 02:06:47.065430 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 02:06:47.065566 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 02:06:47.089493 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 02:06:47.118645 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 02:06:47.118830 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 02:06:47.138619 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 02:06:47.138712 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 02:06:47.181805 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 02:06:47.183937 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 02:06:47.184757 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 02:06:47.254885 ignition[1012]: INFO : Ignition 2.19.0
Apr 16 02:06:47.254885 ignition[1012]: INFO : Stage: umount
Apr 16 02:06:47.254885 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 02:06:47.254885 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 02:06:47.254885 ignition[1012]: INFO : umount: umount passed
Apr 16 02:06:47.254885 ignition[1012]: INFO : Ignition finished successfully
Apr 16 02:06:47.256592 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 02:06:47.256673 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 02:06:47.273965 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 02:06:47.274463 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 02:06:47.293043 systemd[1]: Stopped target network.target - Network.
Apr 16 02:06:47.312561 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 02:06:47.312665 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 02:06:47.334876 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 02:06:47.334936 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 02:06:47.355728 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 02:06:47.355787 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 02:06:47.374774 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 02:06:47.374832 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 02:06:47.394545 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 02:06:47.394607 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 02:06:47.416936 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 02:06:47.440738 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 02:06:47.470065 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 02:06:47.470786 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 02:06:47.504670 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 02:06:47.504741 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 02:06:47.537545 systemd-networkd[783]: eth0: DHCPv6 lease lost
Apr 16 02:06:47.547875 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 02:06:47.548713 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 02:06:47.568118 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 02:06:47.568539 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 02:06:47.657123 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 02:06:47.671103 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 02:06:47.671656 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 02:06:47.695003 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 02:06:47.695712 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 02:06:47.762119 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 02:06:47.762451 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 02:06:47.786745 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 02:06:47.940658 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 02:06:47.940925 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 02:06:47.960481 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 02:06:47.960538 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 02:06:47.965897 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 02:06:47.965937 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 02:06:47.994084 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 02:06:47.994151 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 02:06:48.026523 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 02:06:48.026580 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 02:06:48.047718 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 02:06:48.047773 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 02:06:48.072879 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 02:06:48.086805 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 02:06:48.086877 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 02:06:48.107451 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 02:06:48.107502 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 02:06:48.131815 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 02:06:48.132027 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 02:06:48.154050 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 02:06:48.154591 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 02:06:48.174901 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 02:06:48.197628 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 02:06:48.254705 systemd[1]: Switching root.
Apr 16 02:06:48.390781 systemd-journald[193]: Journal stopped
Apr 16 02:06:51.339033 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 16 02:06:51.339086 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 02:06:51.339098 kernel: SELinux: policy capability open_perms=1
Apr 16 02:06:51.339107 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 02:06:51.339115 kernel: SELinux: policy capability always_check_network=0
Apr 16 02:06:51.339128 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 02:06:51.339137 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 02:06:51.339149 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 02:06:51.339442 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 02:06:51.339453 kernel: audit: type=1403 audit(1776305208.608:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 02:06:51.339463 systemd[1]: Successfully loaded SELinux policy in 106.566ms.
Apr 16 02:06:51.339480 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.796ms.
Apr 16 02:06:51.339490 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 02:06:51.339499 systemd[1]: Detected virtualization kvm.
Apr 16 02:06:51.339509 systemd[1]: Detected architecture x86-64.
Apr 16 02:06:51.339522 systemd[1]: Detected first boot.
Apr 16 02:06:51.339531 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 02:06:51.339541 zram_generator::config[1056]: No configuration found.
Apr 16 02:06:51.339552 systemd[1]: Populated /etc with preset unit settings.
Apr 16 02:06:51.339560 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 02:06:51.339573 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 02:06:51.339582 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 02:06:51.339593 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 02:06:51.339604 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 02:06:51.339613 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 02:06:51.339620 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 02:06:51.339631 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 02:06:51.339642 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 02:06:51.339650 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 02:06:51.339658 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 02:06:51.339666 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 02:06:51.339675 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 02:06:51.339683 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 02:06:51.339690 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 02:06:51.339698 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 02:06:51.339706 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 02:06:51.339714 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 02:06:51.339726 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 02:06:51.339733 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 02:06:51.339741 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 02:06:51.339751 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 02:06:51.339758 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 02:06:51.339768 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 02:06:51.339776 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 02:06:51.339784 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 02:06:51.339792 systemd[1]: Reached target swap.target - Swaps.
Apr 16 02:06:51.339800 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 02:06:51.339807 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 02:06:51.339816 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 02:06:51.339824 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 02:06:51.339832 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 02:06:51.339840 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 02:06:51.339848 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 02:06:51.339855 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 02:06:51.339863 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 02:06:51.339870 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 02:06:51.339879 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 02:06:51.339889 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 02:06:51.339896 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 02:06:51.339904 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 02:06:51.339912 systemd[1]: Reached target machines.target - Containers.
Apr 16 02:06:51.339920 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 02:06:51.339928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 02:06:51.339935 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 02:06:51.339943 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 02:06:51.339952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 02:06:51.339960 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 02:06:51.339968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 02:06:51.339975 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 02:06:51.339983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 02:06:51.339991 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 02:06:51.339998 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 02:06:51.340006 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 02:06:51.340014 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 02:06:51.340023 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 02:06:51.340030 kernel: ACPI: bus type drm_connector registered
Apr 16 02:06:51.340038 kernel: fuse: init (API version 7.39)
Apr 16 02:06:51.340045 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 02:06:51.340053 kernel: loop: module loaded
Apr 16 02:06:51.340060 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 02:06:51.340081 systemd-journald[1140]: Collecting audit messages is disabled.
Apr 16 02:06:51.340098 systemd-journald[1140]: Journal started
Apr 16 02:06:51.340116 systemd-journald[1140]: Runtime Journal (/run/log/journal/f18f3972497247e89aff4f63a7663171) is 6.0M, max 48.3M, 42.2M free.
Apr 16 02:06:49.667860 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 02:06:49.773059 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 16 02:06:49.775108 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 02:06:49.776121 systemd[1]: systemd-journald.service: Consumed 5.106s CPU time.
Apr 16 02:06:51.365826 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 02:06:51.388005 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 02:06:51.404446 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 02:06:51.427655 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 02:06:51.427728 systemd[1]: Stopped verity-setup.service.
Apr 16 02:06:51.468543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 02:06:51.480782 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 02:06:51.491549 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 02:06:51.502820 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 02:06:51.517030 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 02:06:51.530566 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 02:06:51.543769 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 02:06:51.556564 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 02:06:51.568592 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 02:06:51.582144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 02:06:51.597660 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 02:06:51.598035 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 02:06:51.612045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 02:06:51.612768 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 02:06:51.626052 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 02:06:51.627094 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 02:06:51.639929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 02:06:51.640734 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 02:06:51.654845 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 02:06:51.655829 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 02:06:51.668831 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 02:06:51.669985 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 02:06:51.683094 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 02:06:51.743130 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 02:06:51.763780 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 02:06:51.780647 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 02:06:51.810820 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 02:06:51.837016 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 02:06:51.852124 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 02:06:51.863942 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 02:06:51.863972 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 02:06:51.876562 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 16 02:06:51.891644 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 02:06:51.905943 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 02:06:51.916930 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 02:06:51.923016 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 02:06:51.938706 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 02:06:51.951476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 02:06:51.960569 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 02:06:51.971592 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 02:06:51.974938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 02:06:51.998638 systemd-journald[1140]: Time spent on flushing to /var/log/journal/f18f3972497247e89aff4f63a7663171 is 21.629ms for 1003 entries.
Apr 16 02:06:51.998638 systemd-journald[1140]: System Journal (/var/log/journal/f18f3972497247e89aff4f63a7663171) is 8.0M, max 195.6M, 187.6M free.
Apr 16 02:06:52.045082 systemd-journald[1140]: Received client request to flush runtime journal.
Apr 16 02:06:52.045118 kernel: loop0: detected capacity change from 0 to 142488
Apr 16 02:06:52.014613 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 02:06:52.044893 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 02:06:52.078728 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 16 02:06:52.104793 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 02:06:52.117144 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 02:06:52.132933 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 02:06:52.147603 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 02:06:52.162133 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 02:06:52.181769 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 02:06:52.198089 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 02:06:52.223423 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 02:06:52.232901 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 16 02:06:52.234835 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 02:06:52.264048 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 16 02:06:52.283541 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 02:06:52.313440 kernel: loop1: detected capacity change from 0 to 140768
Apr 16 02:06:52.344549 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 02:06:52.345861 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 16 02:06:52.346805 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Apr 16 02:06:52.346813 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Apr 16 02:06:52.362724 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 02:06:52.414555 kernel: loop2: detected capacity change from 0 to 217752
Apr 16 02:06:52.477528 kernel: loop3: detected capacity change from 0 to 142488
Apr 16 02:06:52.542540 kernel: loop4: detected capacity change from 0 to 140768
Apr 16 02:06:52.609765 kernel: loop5: detected capacity change from 0 to 217752
Apr 16 02:06:52.609691 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 02:06:52.638071 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 16 02:06:52.638649 (sd-merge)[1194]: Merged extensions into '/usr'.
Apr 16 02:06:52.639625 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 02:06:52.655614 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 02:06:52.655627 systemd[1]: Reloading...
Apr 16 02:06:52.683745 systemd-udevd[1196]: Using default interface naming scheme 'v255'.
Apr 16 02:06:52.738856 zram_generator::config[1219]: No configuration found.
Apr 16 02:06:52.789067 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 02:06:52.844623 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1229)
Apr 16 02:06:52.941435 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 16 02:06:52.969888 kernel: ACPI: button: Power Button [PWRF]
Apr 16 02:06:52.978909 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 02:06:53.057060 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 16 02:06:53.073670 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 02:06:53.083527 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 16 02:06:53.097429 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 16 02:06:53.120893 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 16 02:06:53.121535 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 16 02:06:53.125080 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 16 02:06:53.125869 systemd[1]: Reloading finished in 469 ms.
Apr 16 02:06:53.582137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 02:06:53.632140 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 02:06:53.645001 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 02:06:53.746563 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 02:06:53.937077 systemd[1]: Starting ensure-sysext.service...
Apr 16 02:06:53.972744 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 02:06:53.991639 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 02:06:54.010029 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 02:06:54.027752 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 02:06:54.042134 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 02:06:54.061148 systemd[1]: Reloading requested from client PID 1290 ('systemctl') (unit ensure-sysext.service)...
Apr 16 02:06:54.061566 systemd[1]: Reloading...
Apr 16 02:06:54.123144 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 02:06:54.126831 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 02:06:54.129802 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 02:06:54.130563 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Apr 16 02:06:54.130645 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Apr 16 02:06:54.144741 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 02:06:54.144748 systemd-tmpfiles[1294]: Skipping /boot
Apr 16 02:06:54.163897 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 02:06:54.164074 systemd-tmpfiles[1294]: Skipping /boot
Apr 16 02:06:54.274574 zram_generator::config[1322]: No configuration found.
Apr 16 02:06:54.623796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 02:06:54.681096 systemd[1]: Reloading finished in 616 ms.
Apr 16 02:06:54.780133 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 16 02:06:54.813689 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 02:06:54.829650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 02:06:54.981122 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 02:06:54.997589 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 02:06:55.012003 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 16 02:06:55.030631 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 02:06:55.047565 lvm[1377]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 02:06:55.047682 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 02:06:55.063748 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 02:06:55.088913 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 02:06:55.093770 augenrules[1390]: No rules
Apr 16 02:06:55.101860 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 02:06:55.115154 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 16 02:06:55.129595 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 02:06:55.155977 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 02:06:55.177030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 02:06:55.188963 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 02:06:55.189078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 02:06:55.199985 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 16 02:06:55.215696 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 02:06:55.231036 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 02:06:55.244942 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 02:06:55.248928 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 02:06:55.260010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 02:06:55.263119 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 02:06:55.272963 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 02:06:55.274019 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 02:06:55.287098 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 02:06:55.303696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 02:06:55.304871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 02:06:55.324678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 02:06:55.324936 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 02:06:55.326015 systemd-networkd[1292]: lo: Link UP
Apr 16 02:06:55.326545 systemd-networkd[1292]: lo: Gained carrier
Apr 16 02:06:55.328565 systemd-networkd[1292]: Enumeration completed
Apr 16 02:06:55.331620 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 02:06:55.332442 systemd-networkd[1292]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 02:06:55.332499 systemd-networkd[1292]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 02:06:55.335480 systemd-networkd[1292]: eth0: Link UP
Apr 16 02:06:55.335537 systemd-networkd[1292]: eth0: Gained carrier
Apr 16 02:06:55.335590 systemd-networkd[1292]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 02:06:55.358636 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 02:06:55.359528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 02:06:55.378541 systemd-networkd[1292]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 02:06:55.382584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 02:06:55.382729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 02:06:55.391064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 02:06:55.395117 systemd-resolved[1384]: Positive Trust Anchors:
Apr 16 02:06:55.395725 systemd-resolved[1384]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 02:06:55.395779 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 02:06:55.402887 systemd-resolved[1384]: Defaulting to hostname 'linux'.
Apr 16 02:06:55.420808 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 02:06:55.433611 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 02:06:55.446775 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 02:06:55.456928 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 02:06:55.459101 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 02:06:55.470779 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 02:06:55.471044 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 02:06:55.473057 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 02:06:55.486581 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 16 02:06:55.500792 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 02:06:55.513081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 02:06:55.513957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 02:06:55.526848 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 02:06:55.527143 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 02:06:55.539712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 02:06:55.540002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 02:06:55.553477 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 02:06:55.553578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 02:06:55.568881 systemd[1]: Finished ensure-sysext.service.
Apr 16 02:06:55.590725 systemd[1]: Reached target network.target - Network.
Apr 16 02:06:55.599789 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 02:06:55.611618 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 02:06:55.611777 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 02:06:55.630795 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 16 02:06:55.691450 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 16 02:06:55.703596 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 02:06:56.283395 systemd-timesyncd[1427]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 16 02:06:56.283424 systemd-timesyncd[1427]: Initial clock synchronization to Thu 2026-04-16 02:06:56.283220 UTC.
Apr 16 02:06:56.284099 systemd-resolved[1384]: Clock change detected. Flushing caches.
Apr 16 02:06:56.290023 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 02:06:56.303080 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 02:06:56.316207 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 02:06:56.329765 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 02:06:56.329885 systemd[1]: Reached target paths.target - Path Units.
Apr 16 02:06:56.339451 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 02:06:56.350309 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 02:06:56.360891 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 02:06:56.374340 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 02:06:56.386401 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 02:06:56.399479 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 02:06:56.421986 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 02:06:56.433361 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 02:06:56.445143 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 02:06:56.455190 systemd[1]: Reached target basic.target - Basic System.
Apr 16 02:06:56.464460 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 02:06:56.465006 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 02:06:56.474327 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 02:06:56.487013 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 02:06:56.498974 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 02:06:56.511235 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 02:06:56.518873 jq[1433]: false
Apr 16 02:06:56.521242 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 02:06:56.523473 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 02:06:56.537196 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 02:06:56.552247 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found loop3
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found loop4
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found loop5
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found sr0
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found vda
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found vda1
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found vda2
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found vda3
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found usr
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found vda4
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found vda6
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found vda7
Apr 16 02:06:56.563382 extend-filesystems[1434]: Found vda9
Apr 16 02:06:56.563382 extend-filesystems[1434]: Checking size of /dev/vda9
Apr 16 02:06:56.893186 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1233)
Apr 16 02:06:56.893219 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 16 02:06:56.571185 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 02:06:56.590238 dbus-daemon[1432]: [system] SELinux support is enabled
Apr 16 02:06:56.894227 extend-filesystems[1434]: Resized partition /dev/vda9
Apr 16 02:06:56.598925 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 02:06:56.906307 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024)
Apr 16 02:06:56.639244 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 02:06:56.641020 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 02:06:56.644902 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 02:06:56.914498 update_engine[1450]: I20260416 02:06:56.688068 1450 main.cc:92] Flatcar Update Engine starting
Apr 16 02:06:56.914498 update_engine[1450]: I20260416 02:06:56.701443 1450 update_check_scheduler.cc:74] Next update check in 6m0s
Apr 16 02:06:56.659055 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 02:06:56.666094 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 02:06:56.915268 tar[1454]: linux-amd64/LICENSE
Apr 16 02:06:56.915268 tar[1454]: linux-amd64/helm
Apr 16 02:06:56.682173 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 02:06:56.915440 jq[1452]: true
Apr 16 02:06:56.683870 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 02:06:56.690179 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 02:06:56.690415 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 02:06:56.727440 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 02:06:56.760241 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 02:06:56.760405 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 02:06:56.820158 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 02:06:56.820177 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 02:06:56.831108 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 16 02:06:56.831120 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 16 02:06:56.841372 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 02:06:56.841389 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 02:06:56.843895 systemd-logind[1442]: New seat seat0.
Apr 16 02:06:56.870343 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 02:06:56.881338 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 02:06:56.901471 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 02:06:56.935042 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 16 02:06:56.975270 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 16 02:06:56.975270 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 16 02:06:56.975270 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 16 02:06:57.019434 extend-filesystems[1434]: Resized filesystem in /dev/vda9
Apr 16 02:06:57.013424 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 02:06:57.042406 jq[1466]: true
Apr 16 02:06:57.013898 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 02:06:57.102248 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 02:06:57.133218 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 16 02:06:57.190308 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 02:06:57.244256 bash[1500]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 02:06:57.247313 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 02:06:57.264974 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 16 02:06:57.288309 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 02:06:57.818248 systemd-networkd[1292]: eth0: Gained IPv6LL
Apr 16 02:06:58.011877 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 02:06:58.031863 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:33798.service - OpenSSH per-connection server daemon (10.0.0.1:33798).
Apr 16 02:06:58.047302 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 02:06:58.064050 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 02:06:58.086451 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 16 02:06:58.114303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 02:06:58.140292 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 02:06:58.151060 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 02:06:58.151210 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 02:06:58.190214 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 02:06:58.310338 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 16 02:06:58.310492 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 16 02:06:58.325114 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 16 02:06:58.336495 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 16 02:06:58.415995 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 02:06:58.446348 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 02:06:58.471380 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 02:06:58.485073 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 02:06:58.523922 sshd[1505]: Accepted publickey for core from 10.0.0.1 port 33798 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M
Apr 16 02:06:58.537913 sshd[1505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:06:58.644986 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 16 02:06:59.142402 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 16 02:06:59.179019 systemd-logind[1442]: New session 1 of user core.
Apr 16 02:06:59.501441 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 16 02:06:59.535908 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 16 02:06:59.563077 (systemd)[1533]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 16 02:07:00.570975 systemd[1533]: Queued start job for default target default.target.
Apr 16 02:07:00.583456 systemd[1533]: Created slice app.slice - User Application Slice.
Apr 16 02:07:00.583474 systemd[1533]: Reached target paths.target - Paths.
Apr 16 02:07:00.583482 systemd[1533]: Reached target timers.target - Timers.
Apr 16 02:07:00.586228 systemd[1533]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 16 02:07:00.714904 systemd[1533]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 16 02:07:00.715106 systemd[1533]: Reached target sockets.target - Sockets.
Apr 16 02:07:00.715153 systemd[1533]: Reached target basic.target - Basic System.
Apr 16 02:07:00.715279 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 16 02:07:00.719285 systemd[1533]: Reached target default.target - Main User Target.
Apr 16 02:07:00.719320 systemd[1533]: Startup finished in 1.096s.
Apr 16 02:07:00.745404 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 16 02:07:01.154139 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:40732.service - OpenSSH per-connection server daemon (10.0.0.1:40732).
Apr 16 02:07:01.253974 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 40732 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M
Apr 16 02:07:01.257251 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:07:01.275977 systemd-logind[1442]: New session 2 of user core.
Apr 16 02:07:01.283108 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 16 02:07:01.355965 tar[1454]: linux-amd64/README.md
Apr 16 02:07:01.397210 sshd[1549]: pam_unix(sshd:session): session closed for user core
Apr 16 02:07:01.405465 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:40732.service: Deactivated successfully.
Apr 16 02:07:01.413929 systemd[1]: session-2.scope: Deactivated successfully.
Apr 16 02:07:01.418210 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit.
Apr 16 02:07:01.425208 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:40742.service - OpenSSH per-connection server daemon (10.0.0.1:40742).
Apr 16 02:07:01.441226 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 16 02:07:01.499016 systemd-logind[1442]: Removed session 2.
Apr 16 02:07:01.574877 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 40742 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M
Apr 16 02:07:01.576433 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:07:01.592114 systemd-logind[1442]: New session 3 of user core.
Apr 16 02:07:01.597124 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 16 02:07:01.682514 sshd[1558]: pam_unix(sshd:session): session closed for user core
Apr 16 02:07:01.689299 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit.
Apr 16 02:07:01.690035 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:40742.service: Deactivated successfully.
Apr 16 02:07:01.691931 systemd[1]: session-3.scope: Deactivated successfully.
Apr 16 02:07:01.696386 systemd-logind[1442]: Removed session 3.
Apr 16 02:07:01.708994 containerd[1463]: time="2026-04-16T02:07:01.707321539Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 16 02:07:02.123135 containerd[1463]: time="2026-04-16T02:07:02.121410436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 16 02:07:02.144816 containerd[1463]: time="2026-04-16T02:07:02.142099019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 16 02:07:02.144816 containerd[1463]: time="2026-04-16T02:07:02.142894804Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 16 02:07:02.144816 containerd[1463]: time="2026-04-16T02:07:02.142917962Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..."
type=io.containerd.internal.v1
Apr 16 02:07:02.144816 containerd[1463]: time="2026-04-16T02:07:02.144005587Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 16 02:07:02.145082 containerd[1463]: time="2026-04-16T02:07:02.145063377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 16 02:07:02.147475 containerd[1463]: time="2026-04-16T02:07:02.147272016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 02:07:02.147934 containerd[1463]: time="2026-04-16T02:07:02.147921853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 16 02:07:02.150164 containerd[1463]: time="2026-04-16T02:07:02.150093060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 02:07:02.150412 containerd[1463]: time="2026-04-16T02:07:02.150273539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 16 02:07:02.151260 containerd[1463]: time="2026-04-16T02:07:02.151241597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 02:07:02.151309 containerd[1463]: time="2026-04-16T02:07:02.151301718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 16 02:07:02.152141 containerd[1463]: time="2026-04-16T02:07:02.152122042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..."
type=io.containerd.snapshotter.v1
Apr 16 02:07:02.156863 containerd[1463]: time="2026-04-16T02:07:02.156710099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 16 02:07:02.159319 containerd[1463]: time="2026-04-16T02:07:02.159259436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 02:07:02.159852 containerd[1463]: time="2026-04-16T02:07:02.159837628Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 16 02:07:02.160850 containerd[1463]: time="2026-04-16T02:07:02.160835366Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 16 02:07:02.161051 containerd[1463]: time="2026-04-16T02:07:02.161038827Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 02:07:02.238815 containerd[1463]: time="2026-04-16T02:07:02.237058581Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 16 02:07:02.238815 containerd[1463]: time="2026-04-16T02:07:02.237693496Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 16 02:07:02.238815 containerd[1463]: time="2026-04-16T02:07:02.237822177Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 16 02:07:02.238815 containerd[1463]: time="2026-04-16T02:07:02.237925956Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 16 02:07:02.238815 containerd[1463]: time="2026-04-16T02:07:02.237960964Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..."
type=io.containerd.runtime.v1 Apr 16 02:07:02.238815 containerd[1463]: time="2026-04-16T02:07:02.238336024Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 16 02:07:02.243313 containerd[1463]: time="2026-04-16T02:07:02.243052477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 16 02:07:02.244699 containerd[1463]: time="2026-04-16T02:07:02.244416958Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 16 02:07:02.245115 containerd[1463]: time="2026-04-16T02:07:02.245097872Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 16 02:07:02.245360 containerd[1463]: time="2026-04-16T02:07:02.245180316Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 16 02:07:02.245851 containerd[1463]: time="2026-04-16T02:07:02.245477601Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 16 02:07:02.246260 containerd[1463]: time="2026-04-16T02:07:02.245989963Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 16 02:07:02.246260 containerd[1463]: time="2026-04-16T02:07:02.246089400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 16 02:07:02.246260 containerd[1463]: time="2026-04-16T02:07:02.246104847Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 16 02:07:02.246260 containerd[1463]: time="2026-04-16T02:07:02.246116307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 16 02:07:02.246260 containerd[1463]: time="2026-04-16T02:07:02.246125566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 16 02:07:02.246260 containerd[1463]: time="2026-04-16T02:07:02.246135357Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 16 02:07:02.246260 containerd[1463]: time="2026-04-16T02:07:02.246235209Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 16 02:07:02.246360 containerd[1463]: time="2026-04-16T02:07:02.246338945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.246360 containerd[1463]: time="2026-04-16T02:07:02.246351113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.247187 containerd[1463]: time="2026-04-16T02:07:02.246449477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.247187 containerd[1463]: time="2026-04-16T02:07:02.246462253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.247187 containerd[1463]: time="2026-04-16T02:07:02.246488271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.250336 containerd[1463]: time="2026-04-16T02:07:02.249463284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.251007 containerd[1463]: time="2026-04-16T02:07:02.250253938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.251007 containerd[1463]: time="2026-04-16T02:07:02.250822324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Apr 16 02:07:02.251007 containerd[1463]: time="2026-04-16T02:07:02.250953018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.251007 containerd[1463]: time="2026-04-16T02:07:02.251000327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.251077 containerd[1463]: time="2026-04-16T02:07:02.251011337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.251077 containerd[1463]: time="2026-04-16T02:07:02.251020656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.251077 containerd[1463]: time="2026-04-16T02:07:02.251037777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.251113 containerd[1463]: time="2026-04-16T02:07:02.251089309Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 16 02:07:02.251806 containerd[1463]: time="2026-04-16T02:07:02.251268179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.251806 containerd[1463]: time="2026-04-16T02:07:02.251369125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.251806 containerd[1463]: time="2026-04-16T02:07:02.251378052Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 16 02:07:02.252016 containerd[1463]: time="2026-04-16T02:07:02.252000041Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 16 02:07:02.252332 containerd[1463]: time="2026-04-16T02:07:02.252311656Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 16 02:07:02.252392 containerd[1463]: time="2026-04-16T02:07:02.252379657Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 16 02:07:02.252452 containerd[1463]: time="2026-04-16T02:07:02.252439702Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 16 02:07:02.252493 containerd[1463]: time="2026-04-16T02:07:02.252484296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 16 02:07:02.255120 containerd[1463]: time="2026-04-16T02:07:02.254814780Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 16 02:07:02.255223 containerd[1463]: time="2026-04-16T02:07:02.255143598Z" level=info msg="NRI interface is disabled by configuration." Apr 16 02:07:02.255223 containerd[1463]: time="2026-04-16T02:07:02.255188346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 16 02:07:02.261030 containerd[1463]: time="2026-04-16T02:07:02.260230771Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 16 02:07:02.268007 containerd[1463]: time="2026-04-16T02:07:02.261306467Z" level=info msg="Connect containerd service" Apr 16 02:07:02.268007 containerd[1463]: time="2026-04-16T02:07:02.261463351Z" level=info msg="using legacy CRI server" Apr 16 02:07:02.268007 containerd[1463]: time="2026-04-16T02:07:02.261476384Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 02:07:02.268007 containerd[1463]: time="2026-04-16T02:07:02.264335132Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 16 02:07:02.276018 containerd[1463]: time="2026-04-16T02:07:02.275226479Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 02:07:02.287985 containerd[1463]: time="2026-04-16T02:07:02.283377034Z" level=info msg="Start subscribing containerd event" Apr 16 02:07:02.288358 containerd[1463]: time="2026-04-16T02:07:02.288305338Z" level=info msg="Start recovering state" Apr 16 02:07:02.288710 containerd[1463]: time="2026-04-16T02:07:02.283808945Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Apr 16 02:07:02.288965 containerd[1463]: time="2026-04-16T02:07:02.288952852Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 02:07:02.289298 containerd[1463]: time="2026-04-16T02:07:02.289280279Z" level=info msg="Start event monitor" Apr 16 02:07:02.289459 containerd[1463]: time="2026-04-16T02:07:02.289448653Z" level=info msg="Start snapshots syncer" Apr 16 02:07:02.290853 containerd[1463]: time="2026-04-16T02:07:02.290096671Z" level=info msg="Start cni network conf syncer for default" Apr 16 02:07:02.291901 containerd[1463]: time="2026-04-16T02:07:02.290889989Z" level=info msg="Start streaming server" Apr 16 02:07:02.291852 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 02:07:02.302089 containerd[1463]: time="2026-04-16T02:07:02.302019200Z" level=info msg="containerd successfully booted in 0.602612s" Apr 16 02:07:08.152361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:07:08.154034 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 02:07:08.154325 systemd[1]: Startup finished in 4.976s (kernel) + 15.344s (initrd) + 19.075s (userspace) = 39.395s. Apr 16 02:07:08.222219 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:07:12.523914 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1167202804 wd_nsec: 1167202414 Apr 16 02:07:12.551368 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:53628.service - OpenSSH per-connection server daemon (10.0.0.1:53628). 
Apr 16 02:07:12.699260 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 53628 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:07:12.711303 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:07:12.727911 systemd-logind[1442]: New session 4 of user core. Apr 16 02:07:12.741329 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 02:07:12.894976 sshd[1587]: pam_unix(sshd:session): session closed for user core Apr 16 02:07:12.904318 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:53628.service: Deactivated successfully. Apr 16 02:07:12.907245 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 02:07:12.909278 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Apr 16 02:07:12.919223 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:53630.service - OpenSSH per-connection server daemon (10.0.0.1:53630). Apr 16 02:07:12.921327 systemd-logind[1442]: Removed session 4. Apr 16 02:07:13.045332 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 53630 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:07:13.055292 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:07:13.088956 systemd-logind[1442]: New session 5 of user core. Apr 16 02:07:13.103257 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 16 02:07:13.215360 sshd[1595]: pam_unix(sshd:session): session closed for user core Apr 16 02:07:13.232277 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:53630.service: Deactivated successfully. Apr 16 02:07:13.237266 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 02:07:13.240183 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Apr 16 02:07:13.249998 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:53636.service - OpenSSH per-connection server daemon (10.0.0.1:53636). Apr 16 02:07:13.252755 systemd-logind[1442]: Removed session 5. 
Apr 16 02:07:13.377341 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 53636 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:07:13.390107 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:07:13.405034 systemd-logind[1442]: New session 6 of user core. Apr 16 02:07:13.413043 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 16 02:07:13.527332 sshd[1602]: pam_unix(sshd:session): session closed for user core Apr 16 02:07:13.539498 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:53636.service: Deactivated successfully. Apr 16 02:07:13.542995 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 02:07:13.549761 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Apr 16 02:07:13.562249 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:53642.service - OpenSSH per-connection server daemon (10.0.0.1:53642). Apr 16 02:07:13.568313 systemd-logind[1442]: Removed session 6. Apr 16 02:07:13.730973 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 53642 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:07:13.733132 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:07:13.747387 systemd-logind[1442]: New session 7 of user core. Apr 16 02:07:13.756313 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 02:07:13.862401 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 02:07:13.863317 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:07:13.926085 sudo[1612]: pam_unix(sudo:session): session closed for user root Apr 16 02:07:13.929046 sshd[1609]: pam_unix(sshd:session): session closed for user core Apr 16 02:07:13.940930 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:53642.service: Deactivated successfully. Apr 16 02:07:13.943067 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 16 02:07:13.945401 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Apr 16 02:07:13.948046 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:53658.service - OpenSSH per-connection server daemon (10.0.0.1:53658). Apr 16 02:07:13.949223 systemd-logind[1442]: Removed session 7. Apr 16 02:07:13.951441 kubelet[1579]: E0416 02:07:13.951068 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:07:13.953974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:07:13.954108 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:07:13.954338 systemd[1]: kubelet.service: Consumed 14.197s CPU time. Apr 16 02:07:14.036094 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 53658 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:07:14.038459 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:07:14.044753 systemd-logind[1442]: New session 8 of user core. Apr 16 02:07:14.064734 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 16 02:07:14.280833 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 02:07:14.285843 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:07:14.308699 sudo[1622]: pam_unix(sudo:session): session closed for user root Apr 16 02:07:14.365022 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 16 02:07:14.366460 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:07:14.425028 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 16 02:07:14.428026 auditctl[1625]: No rules Apr 16 02:07:14.428303 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 02:07:14.428651 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 16 02:07:14.436998 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 16 02:07:14.517688 augenrules[1643]: No rules Apr 16 02:07:14.520367 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 16 02:07:14.522449 sudo[1621]: pam_unix(sudo:session): session closed for user root Apr 16 02:07:14.528336 sshd[1617]: pam_unix(sshd:session): session closed for user core Apr 16 02:07:14.539333 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:53658.service: Deactivated successfully. Apr 16 02:07:14.543693 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 02:07:14.546230 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Apr 16 02:07:14.564028 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:53664.service - OpenSSH per-connection server daemon (10.0.0.1:53664). Apr 16 02:07:14.567706 systemd-logind[1442]: Removed session 8. 
Apr 16 02:07:14.606677 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 53664 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:07:14.608969 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:07:14.617458 systemd-logind[1442]: New session 9 of user core. Apr 16 02:07:14.623854 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 02:07:14.681723 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 02:07:14.682009 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:07:17.978267 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 02:07:17.978785 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 02:07:21.439418 dockerd[1673]: time="2026-04-16T02:07:21.439103310Z" level=info msg="Starting up" Apr 16 02:07:22.388283 dockerd[1673]: time="2026-04-16T02:07:22.388138404Z" level=info msg="Loading containers: start." Apr 16 02:07:22.846001 kernel: Initializing XFRM netlink socket Apr 16 02:07:23.103064 systemd-networkd[1292]: docker0: Link UP Apr 16 02:07:23.250673 dockerd[1673]: time="2026-04-16T02:07:23.250294361Z" level=info msg="Loading containers: done." Apr 16 02:07:23.354942 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2082005044-merged.mount: Deactivated successfully. 
Apr 16 02:07:23.358476 dockerd[1673]: time="2026-04-16T02:07:23.358178823Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 02:07:23.359064 dockerd[1673]: time="2026-04-16T02:07:23.358946022Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 16 02:07:23.359420 dockerd[1673]: time="2026-04-16T02:07:23.359307039Z" level=info msg="Daemon has completed initialization" Apr 16 02:07:23.528030 dockerd[1673]: time="2026-04-16T02:07:23.527628826Z" level=info msg="API listen on /run/docker.sock" Apr 16 02:07:23.528666 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 02:07:24.205301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 02:07:24.662982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:07:25.243996 containerd[1463]: time="2026-04-16T02:07:25.243464561Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 16 02:07:26.147699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:07:26.255310 (kubelet)[1828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:07:26.445444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161448077.mount: Deactivated successfully. 
Apr 16 02:07:26.782086 kubelet[1828]: E0416 02:07:26.781758 1828 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:07:26.789374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:07:26.789742 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:07:26.790298 systemd[1]: kubelet.service: Consumed 1.823s CPU time. Apr 16 02:07:31.023014 containerd[1463]: time="2026-04-16T02:07:31.022693004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:31.023760 containerd[1463]: time="2026-04-16T02:07:31.023415047Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 16 02:07:31.026351 containerd[1463]: time="2026-04-16T02:07:31.026291463Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:31.033631 containerd[1463]: time="2026-04-16T02:07:31.033491117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:31.036629 containerd[1463]: time="2026-04-16T02:07:31.036384429Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 5.792587438s" Apr 16 02:07:31.036629 containerd[1463]: time="2026-04-16T02:07:31.036611820Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 16 02:07:31.049220 containerd[1463]: time="2026-04-16T02:07:31.048989874Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 16 02:07:35.444408 containerd[1463]: time="2026-04-16T02:07:35.443967363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:35.447325 containerd[1463]: time="2026-04-16T02:07:35.447096047Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 16 02:07:35.459959 containerd[1463]: time="2026-04-16T02:07:35.459846801Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:35.482440 containerd[1463]: time="2026-04-16T02:07:35.482219019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:35.485939 containerd[1463]: time="2026-04-16T02:07:35.485730880Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 
4.436597637s" Apr 16 02:07:35.486070 containerd[1463]: time="2026-04-16T02:07:35.485974468Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 16 02:07:35.491477 containerd[1463]: time="2026-04-16T02:07:35.491290658Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 16 02:07:36.847427 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 16 02:07:36.865705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:07:37.222912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:07:37.247218 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:07:37.626339 kubelet[1908]: E0416 02:07:37.626294 1908 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:07:37.631386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:07:37.631678 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 02:07:38.314512 containerd[1463]: time="2026-04-16T02:07:38.314260145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:38.316967 containerd[1463]: time="2026-04-16T02:07:38.316850753Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 16 02:07:38.319908 containerd[1463]: time="2026-04-16T02:07:38.319678094Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:38.325117 containerd[1463]: time="2026-04-16T02:07:38.325015873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:38.329332 containerd[1463]: time="2026-04-16T02:07:38.329069855Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 2.837623582s" Apr 16 02:07:38.329332 containerd[1463]: time="2026-04-16T02:07:38.329254399Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 16 02:07:38.333196 containerd[1463]: time="2026-04-16T02:07:38.333128956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 16 02:07:41.779155 update_engine[1450]: I20260416 02:07:41.778760 1450 update_attempter.cc:509] Updating boot flags... 
Apr 16 02:07:41.896000 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1929) Apr 16 02:07:42.027051 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1929) Apr 16 02:07:42.110928 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1929) Apr 16 02:07:43.888180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3674506946.mount: Deactivated successfully. Apr 16 02:07:45.150403 containerd[1463]: time="2026-04-16T02:07:45.150271008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:45.152395 containerd[1463]: time="2026-04-16T02:07:45.150648469Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 16 02:07:45.154956 containerd[1463]: time="2026-04-16T02:07:45.154827801Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:45.159009 containerd[1463]: time="2026-04-16T02:07:45.158865895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:45.159755 containerd[1463]: time="2026-04-16T02:07:45.159654114Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 6.826421457s" Apr 16 02:07:45.159821 containerd[1463]: time="2026-04-16T02:07:45.159761736Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 16 02:07:45.165126 containerd[1463]: time="2026-04-16T02:07:45.164955984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 16 02:07:46.690043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3253453017.mount: Deactivated successfully. Apr 16 02:07:47.845327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 16 02:07:47.866791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:07:48.090314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:07:48.091030 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:07:48.418423 kubelet[1973]: E0416 02:07:48.418174 1973 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:07:48.422456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:07:48.422780 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 02:07:51.762047 containerd[1463]: time="2026-04-16T02:07:51.761457243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:51.763190 containerd[1463]: time="2026-04-16T02:07:51.762664113Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 16 02:07:51.826385 containerd[1463]: time="2026-04-16T02:07:51.826167956Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:51.885951 containerd[1463]: time="2026-04-16T02:07:51.885432800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:51.898607 containerd[1463]: time="2026-04-16T02:07:51.898462616Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 6.733365515s" Apr 16 02:07:51.898808 containerd[1463]: time="2026-04-16T02:07:51.898762036Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 16 02:07:51.901500 containerd[1463]: time="2026-04-16T02:07:51.901466799Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 16 02:07:53.148228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169322590.mount: Deactivated successfully. 
Apr 16 02:07:53.174017 containerd[1463]: time="2026-04-16T02:07:53.173708171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:53.174494 containerd[1463]: time="2026-04-16T02:07:53.174262662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 16 02:07:53.179980 containerd[1463]: time="2026-04-16T02:07:53.179854571Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:53.208467 containerd[1463]: time="2026-04-16T02:07:53.208235247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:53.213458 containerd[1463]: time="2026-04-16T02:07:53.213248327Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.311729784s" Apr 16 02:07:53.213458 containerd[1463]: time="2026-04-16T02:07:53.213368776Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 16 02:07:53.217916 containerd[1463]: time="2026-04-16T02:07:53.217788455Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 16 02:07:54.107343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4151570581.mount: Deactivated successfully. 
Apr 16 02:07:56.936941 containerd[1463]: time="2026-04-16T02:07:56.936467802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:56.939263 containerd[1463]: time="2026-04-16T02:07:56.938487404Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 16 02:07:56.946750 containerd[1463]: time="2026-04-16T02:07:56.946415730Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:56.963929 containerd[1463]: time="2026-04-16T02:07:56.963773582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:07:56.967227 containerd[1463]: time="2026-04-16T02:07:56.967149531Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 3.749311849s" Apr 16 02:07:56.967227 containerd[1463]: time="2026-04-16T02:07:56.967189957Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 16 02:07:58.538834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 16 02:07:58.554415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:07:58.569863 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 02:07:58.569966 systemd[1]: kubelet.service: Failed with result 'signal'. 
Apr 16 02:07:58.570197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:07:58.583008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:07:58.619416 systemd[1]: Reloading requested from client PID 2114 ('systemctl') (unit session-9.scope)... Apr 16 02:07:58.619484 systemd[1]: Reloading... Apr 16 02:07:58.715638 zram_generator::config[2153]: No configuration found. Apr 16 02:07:58.826283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 02:07:58.898096 systemd[1]: Reloading finished in 278 ms. Apr 16 02:07:58.963015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:07:58.966256 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:07:58.972968 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 02:07:58.973261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:07:58.976095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:07:59.163039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:07:59.169422 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 02:07:59.260444 kubelet[2203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 16 02:07:59.535814 kubelet[2203]: I0416 02:07:59.535353 2203 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 16 02:07:59.535814 kubelet[2203]: I0416 02:07:59.535630 2203 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 02:07:59.536017 kubelet[2203]: I0416 02:07:59.535929 2203 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 02:07:59.536017 kubelet[2203]: I0416 02:07:59.535937 2203 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 02:07:59.536368 kubelet[2203]: I0416 02:07:59.536271 2203 server.go:951] "Client rotation is on, will bootstrap in background" Apr 16 02:07:59.622400 kubelet[2203]: E0416 02:07:59.622098 2203 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 02:07:59.622400 kubelet[2203]: I0416 02:07:59.622165 2203 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 02:07:59.641141 kubelet[2203]: E0416 02:07:59.640981 2203 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 02:07:59.641319 kubelet[2203]: I0416 02:07:59.641185 2203 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 16 02:07:59.663266 kubelet[2203]: I0416 02:07:59.662944 2203 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 02:07:59.665322 kubelet[2203]: I0416 02:07:59.665144 2203 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 02:07:59.665956 kubelet[2203]: I0416 02:07:59.665251 2203 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 02:07:59.666270 kubelet[2203]: I0416 02:07:59.666115 2203 topology_manager.go:143] "Creating topology manager with none policy" Apr 16 02:07:59.666270 
kubelet[2203]: I0416 02:07:59.666129 2203 container_manager_linux.go:308] "Creating device plugin manager" Apr 16 02:07:59.666921 kubelet[2203]: I0416 02:07:59.666793 2203 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 02:07:59.672333 kubelet[2203]: I0416 02:07:59.672167 2203 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 16 02:07:59.673638 kubelet[2203]: I0416 02:07:59.673507 2203 kubelet.go:482] "Attempting to sync node with API server" Apr 16 02:07:59.673902 kubelet[2203]: I0416 02:07:59.673785 2203 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 02:07:59.674275 kubelet[2203]: I0416 02:07:59.674255 2203 kubelet.go:394] "Adding apiserver pod source" Apr 16 02:07:59.674406 kubelet[2203]: I0416 02:07:59.674372 2203 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 02:07:59.683216 kubelet[2203]: I0416 02:07:59.683086 2203 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 02:07:59.701607 kubelet[2203]: I0416 02:07:59.699891 2203 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 02:07:59.701607 kubelet[2203]: I0416 02:07:59.699926 2203 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 02:07:59.701607 kubelet[2203]: W0416 02:07:59.700364 2203 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 16 02:07:59.706146 kubelet[2203]: I0416 02:07:59.706087 2203 server.go:1257] "Started kubelet" Apr 16 02:07:59.707163 kubelet[2203]: I0416 02:07:59.706608 2203 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 02:07:59.708812 kubelet[2203]: I0416 02:07:59.708634 2203 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 16 02:07:59.709984 kubelet[2203]: I0416 02:07:59.709873 2203 server.go:317] "Adding debug handlers to kubelet server" Apr 16 02:07:59.712114 kubelet[2203]: I0416 02:07:59.711635 2203 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 02:07:59.712218 kubelet[2203]: I0416 02:07:59.712164 2203 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 02:07:59.714862 kubelet[2203]: I0416 02:07:59.712428 2203 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 02:07:59.714862 kubelet[2203]: I0416 02:07:59.713458 2203 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 02:07:59.714862 kubelet[2203]: E0416 02:07:59.712082 2203 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b43d9985d8a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 02:07:59.705929895 +0000 UTC m=+0.530270047,LastTimestamp:2026-04-16 02:07:59.705929895 +0000 UTC m=+0.530270047,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 02:07:59.714862 kubelet[2203]: E0416 02:07:59.714338 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:07:59.714862 kubelet[2203]: I0416 02:07:59.714643 2203 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 16 02:07:59.715217 kubelet[2203]: I0416 02:07:59.715204 2203 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 02:07:59.715856 kubelet[2203]: I0416 02:07:59.715734 2203 reconciler.go:29] "Reconciler: start to sync state" Apr 16 02:07:59.718975 kubelet[2203]: I0416 02:07:59.718915 2203 factory.go:223] Registration of the systemd container factory successfully Apr 16 02:07:59.719071 kubelet[2203]: E0416 02:07:59.718945 2203 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Apr 16 02:07:59.719217 kubelet[2203]: I0416 02:07:59.719123 2203 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 02:07:59.721597 kubelet[2203]: E0416 02:07:59.721469 2203 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 02:07:59.721981 kubelet[2203]: I0416 02:07:59.721899 2203 factory.go:223] Registration of the containerd container factory successfully Apr 16 02:07:59.749807 kubelet[2203]: I0416 02:07:59.749786 2203 cpu_manager.go:225] "Starting" policy="none" Apr 16 02:07:59.750462 kubelet[2203]: I0416 02:07:59.750077 2203 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 16 02:07:59.750462 kubelet[2203]: I0416 02:07:59.750136 2203 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 16 02:07:59.753011 kubelet[2203]: I0416 02:07:59.752998 2203 policy_none.go:50] "Start" Apr 16 02:07:59.753174 kubelet[2203]: I0416 02:07:59.753167 2203 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 02:07:59.753376 kubelet[2203]: I0416 02:07:59.753368 2203 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 02:07:59.758295 kubelet[2203]: I0416 02:07:59.758268 2203 policy_none.go:44] "Start" Apr 16 02:07:59.767050 kubelet[2203]: I0416 02:07:59.766745 2203 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 02:07:59.770786 kubelet[2203]: I0416 02:07:59.770464 2203 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 16 02:07:59.770964 kubelet[2203]: I0416 02:07:59.770902 2203 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 16 02:07:59.772294 kubelet[2203]: I0416 02:07:59.771495 2203 kubelet.go:2501] "Starting kubelet main sync loop" Apr 16 02:07:59.773852 kubelet[2203]: E0416 02:07:59.773748 2203 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 02:07:59.780884 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 16 02:07:59.809223 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 02:07:59.815447 kubelet[2203]: E0416 02:07:59.815271 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:07:59.819834 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 16 02:07:59.842833 kubelet[2203]: E0416 02:07:59.842511 2203 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 02:07:59.843256 kubelet[2203]: I0416 02:07:59.843074 2203 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 16 02:07:59.843256 kubelet[2203]: I0416 02:07:59.843200 2203 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 02:07:59.844075 kubelet[2203]: I0416 02:07:59.843936 2203 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 16 02:07:59.847367 kubelet[2203]: E0416 02:07:59.847320 2203 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 02:07:59.847367 kubelet[2203]: E0416 02:07:59.847344 2203 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:07:59.911660 systemd[1]: Created slice kubepods-burstable-podd42913f7f16752bf7aea774413726d2e.slice - libcontainer container kubepods-burstable-podd42913f7f16752bf7aea774413726d2e.slice. 
Apr 16 02:07:59.919051 kubelet[2203]: I0416 02:07:59.918868 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:07:59.919252 kubelet[2203]: I0416 02:07:59.919094 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:07:59.919252 kubelet[2203]: I0416 02:07:59.919111 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:07:59.919252 kubelet[2203]: I0416 02:07:59.919124 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 16 02:07:59.919252 kubelet[2203]: I0416 02:07:59.919135 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:07:59.919252 kubelet[2203]: I0416 
02:07:59.919149 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:07:59.919424 kubelet[2203]: I0416 02:07:59.919160 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:07:59.919424 kubelet[2203]: I0416 02:07:59.919170 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:07:59.919424 kubelet[2203]: I0416 02:07:59.919183 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:07:59.919916 kubelet[2203]: E0416 02:07:59.919846 2203 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Apr 16 02:07:59.929257 kubelet[2203]: E0416 02:07:59.929066 2203 kubelet.go:3336] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:07:59.933763 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. Apr 16 02:07:59.943444 kubelet[2203]: E0416 02:07:59.943307 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:07:59.952951 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. Apr 16 02:07:59.957658 kubelet[2203]: I0416 02:07:59.957319 2203 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 02:07:59.957658 kubelet[2203]: E0416 02:07:59.957343 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:07:59.958018 kubelet[2203]: E0416 02:07:59.957970 2203 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 16 02:08:00.166461 kubelet[2203]: I0416 02:08:00.166173 2203 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 02:08:00.166978 kubelet[2203]: E0416 02:08:00.166795 2203 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 16 02:08:00.237727 kubelet[2203]: E0416 02:08:00.237356 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 
02:08:00.242055 containerd[1463]: time="2026-04-16T02:08:00.241874680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d42913f7f16752bf7aea774413726d2e,Namespace:kube-system,Attempt:0,}" Apr 16 02:08:00.249263 kubelet[2203]: E0416 02:08:00.249002 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:00.252328 containerd[1463]: time="2026-04-16T02:08:00.252238550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 16 02:08:00.265098 kubelet[2203]: E0416 02:08:00.264834 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:00.267288 containerd[1463]: time="2026-04-16T02:08:00.266916423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 16 02:08:00.351360 kubelet[2203]: E0416 02:08:00.351098 2203 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Apr 16 02:08:00.571280 kubelet[2203]: I0416 02:08:00.571126 2203 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 02:08:00.571832 kubelet[2203]: E0416 02:08:00.571772 2203 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 16 02:08:00.813327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1137400114.mount: 
Deactivated successfully. Apr 16 02:08:00.834269 containerd[1463]: time="2026-04-16T02:08:00.833999700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:08:00.839384 containerd[1463]: time="2026-04-16T02:08:00.839328387Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 02:08:00.841345 containerd[1463]: time="2026-04-16T02:08:00.841204045Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:08:00.842449 containerd[1463]: time="2026-04-16T02:08:00.842355807Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 02:08:00.844192 containerd[1463]: time="2026-04-16T02:08:00.844072338Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:08:00.846062 containerd[1463]: time="2026-04-16T02:08:00.845981269Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:08:00.847337 containerd[1463]: time="2026-04-16T02:08:00.847101227Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 16 02:08:00.849050 containerd[1463]: time="2026-04-16T02:08:00.848948758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Apr 16 02:08:00.854169 containerd[1463]: time="2026-04-16T02:08:00.853977778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 611.958169ms" Apr 16 02:08:00.855759 containerd[1463]: time="2026-04-16T02:08:00.855734050Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 588.599654ms" Apr 16 02:08:00.863948 containerd[1463]: time="2026-04-16T02:08:00.863460303Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 611.110597ms" Apr 16 02:08:01.011743 containerd[1463]: time="2026-04-16T02:08:01.011039466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 02:08:01.011743 containerd[1463]: time="2026-04-16T02:08:01.011078872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 02:08:01.011743 containerd[1463]: time="2026-04-16T02:08:01.011088199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:01.011743 containerd[1463]: time="2026-04-16T02:08:01.011140565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:01.011743 containerd[1463]: time="2026-04-16T02:08:01.011024650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 02:08:01.011743 containerd[1463]: time="2026-04-16T02:08:01.011065323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 02:08:01.011743 containerd[1463]: time="2026-04-16T02:08:01.011076943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:01.011743 containerd[1463]: time="2026-04-16T02:08:01.011429484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:01.038111 containerd[1463]: time="2026-04-16T02:08:01.036933844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 02:08:01.041753 containerd[1463]: time="2026-04-16T02:08:01.038126541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 02:08:01.041753 containerd[1463]: time="2026-04-16T02:08:01.038139917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:01.041753 containerd[1463]: time="2026-04-16T02:08:01.038396628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:01.074297 systemd[1]: Started cri-containerd-0d30f7fce8b05b9b78a652f08b144f8fca8bb9ba0e7f3e7570458a30a0a1ed83.scope - libcontainer container 0d30f7fce8b05b9b78a652f08b144f8fca8bb9ba0e7f3e7570458a30a0a1ed83. Apr 16 02:08:01.076343 systemd[1]: Started cri-containerd-e0ad3578f3c965ac3f90b5b3c36b02e737608c2c8190616d433caab145a36cbd.scope - libcontainer container e0ad3578f3c965ac3f90b5b3c36b02e737608c2c8190616d433caab145a36cbd. Apr 16 02:08:01.081054 systemd[1]: Started cri-containerd-9df8357539e6a615c4265a4610900c9a954b7f8519b688c0045238dbb05221c6.scope - libcontainer container 9df8357539e6a615c4265a4610900c9a954b7f8519b688c0045238dbb05221c6. Apr 16 02:08:01.155645 kubelet[2203]: E0416 02:08:01.153963 2203 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Apr 16 02:08:01.166277 containerd[1463]: time="2026-04-16T02:08:01.166180998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d30f7fce8b05b9b78a652f08b144f8fca8bb9ba0e7f3e7570458a30a0a1ed83\"" Apr 16 02:08:01.168839 kubelet[2203]: E0416 02:08:01.168586 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:01.177041 containerd[1463]: time="2026-04-16T02:08:01.176992040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d42913f7f16752bf7aea774413726d2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9df8357539e6a615c4265a4610900c9a954b7f8519b688c0045238dbb05221c6\"" Apr 16 02:08:01.186055 containerd[1463]: 
time="2026-04-16T02:08:01.185801812Z" level=info msg="CreateContainer within sandbox \"0d30f7fce8b05b9b78a652f08b144f8fca8bb9ba0e7f3e7570458a30a0a1ed83\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 02:08:01.186111 kubelet[2203]: E0416 02:08:01.185970 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:01.198436 containerd[1463]: time="2026-04-16T02:08:01.198199454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0ad3578f3c965ac3f90b5b3c36b02e737608c2c8190616d433caab145a36cbd\"" Apr 16 02:08:01.205586 containerd[1463]: time="2026-04-16T02:08:01.205320786Z" level=info msg="CreateContainer within sandbox \"9df8357539e6a615c4265a4610900c9a954b7f8519b688c0045238dbb05221c6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 02:08:01.206766 kubelet[2203]: E0416 02:08:01.206439 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:01.226415 containerd[1463]: time="2026-04-16T02:08:01.226294688Z" level=info msg="CreateContainer within sandbox \"e0ad3578f3c965ac3f90b5b3c36b02e737608c2c8190616d433caab145a36cbd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 02:08:01.233348 containerd[1463]: time="2026-04-16T02:08:01.233320656Z" level=info msg="CreateContainer within sandbox \"0d30f7fce8b05b9b78a652f08b144f8fca8bb9ba0e7f3e7570458a30a0a1ed83\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de0f20d1dc27e4c205311f9595f2ec3d0f556593a9e96130bba259db04dae8e5\"" Apr 16 02:08:01.235013 containerd[1463]: time="2026-04-16T02:08:01.234970303Z" level=info 
msg="StartContainer for \"de0f20d1dc27e4c205311f9595f2ec3d0f556593a9e96130bba259db04dae8e5\"" Apr 16 02:08:01.247045 containerd[1463]: time="2026-04-16T02:08:01.246816892Z" level=info msg="CreateContainer within sandbox \"9df8357539e6a615c4265a4610900c9a954b7f8519b688c0045238dbb05221c6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a8cef4aeb97334b9f205e8144e7171cfd534293ad810fca76cfcd8ae9d2bb71\"" Apr 16 02:08:01.259395 containerd[1463]: time="2026-04-16T02:08:01.259257446Z" level=info msg="StartContainer for \"9a8cef4aeb97334b9f205e8144e7171cfd534293ad810fca76cfcd8ae9d2bb71\"" Apr 16 02:08:01.281880 containerd[1463]: time="2026-04-16T02:08:01.281439631Z" level=info msg="CreateContainer within sandbox \"e0ad3578f3c965ac3f90b5b3c36b02e737608c2c8190616d433caab145a36cbd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2e7da1fd14954ed9e816a9b50b42e2776a98b9fd14b31cb7243e62d9970a788e\"" Apr 16 02:08:01.330404 containerd[1463]: time="2026-04-16T02:08:01.330226516Z" level=info msg="StartContainer for \"2e7da1fd14954ed9e816a9b50b42e2776a98b9fd14b31cb7243e62d9970a788e\"" Apr 16 02:08:01.347211 systemd[1]: Started cri-containerd-de0f20d1dc27e4c205311f9595f2ec3d0f556593a9e96130bba259db04dae8e5.scope - libcontainer container de0f20d1dc27e4c205311f9595f2ec3d0f556593a9e96130bba259db04dae8e5. Apr 16 02:08:01.377292 kubelet[2203]: I0416 02:08:01.376989 2203 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 02:08:01.378192 kubelet[2203]: E0416 02:08:01.378018 2203 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 16 02:08:01.377312 systemd[1]: Started cri-containerd-9a8cef4aeb97334b9f205e8144e7171cfd534293ad810fca76cfcd8ae9d2bb71.scope - libcontainer container 9a8cef4aeb97334b9f205e8144e7171cfd534293ad810fca76cfcd8ae9d2bb71. 
Apr 16 02:08:01.414446 systemd[1]: Started cri-containerd-2e7da1fd14954ed9e816a9b50b42e2776a98b9fd14b31cb7243e62d9970a788e.scope - libcontainer container 2e7da1fd14954ed9e816a9b50b42e2776a98b9fd14b31cb7243e62d9970a788e. Apr 16 02:08:01.504420 containerd[1463]: time="2026-04-16T02:08:01.503830465Z" level=info msg="StartContainer for \"de0f20d1dc27e4c205311f9595f2ec3d0f556593a9e96130bba259db04dae8e5\" returns successfully" Apr 16 02:08:01.514859 containerd[1463]: time="2026-04-16T02:08:01.514386272Z" level=info msg="StartContainer for \"9a8cef4aeb97334b9f205e8144e7171cfd534293ad810fca76cfcd8ae9d2bb71\" returns successfully" Apr 16 02:08:01.576670 containerd[1463]: time="2026-04-16T02:08:01.575168163Z" level=info msg="StartContainer for \"2e7da1fd14954ed9e816a9b50b42e2776a98b9fd14b31cb7243e62d9970a788e\" returns successfully" Apr 16 02:08:01.780043 kubelet[2203]: E0416 02:08:01.777818 2203 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 02:08:01.835295 kubelet[2203]: E0416 02:08:01.835194 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:08:01.835481 kubelet[2203]: E0416 02:08:01.835407 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:01.841641 kubelet[2203]: E0416 02:08:01.841050 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:08:01.841641 kubelet[2203]: E0416 02:08:01.841138 2203 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:01.886169 kubelet[2203]: E0416 02:08:01.885825 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:08:01.887804 kubelet[2203]: E0416 02:08:01.886450 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:02.905908 kubelet[2203]: E0416 02:08:02.905505 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:08:02.905908 kubelet[2203]: E0416 02:08:02.905777 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:02.908673 kubelet[2203]: E0416 02:08:02.906748 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:08:02.908673 kubelet[2203]: E0416 02:08:02.907462 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:02.989146 kubelet[2203]: I0416 02:08:02.989105 2203 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 02:08:03.660751 kubelet[2203]: E0416 02:08:03.660664 2203 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 16 02:08:03.733628 kubelet[2203]: I0416 02:08:03.733303 2203 kubelet_node_status.go:77] "Successfully registered node" node="localhost" 
Apr 16 02:08:03.746748 kubelet[2203]: I0416 02:08:03.739949 2203 apiserver.go:52] "Watching apiserver" Apr 16 02:08:03.832397 kubelet[2203]: I0416 02:08:03.832292 2203 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 02:08:03.835401 kubelet[2203]: I0416 02:08:03.833100 2203 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 02:08:03.851758 kubelet[2203]: E0416 02:08:03.851443 2203 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a6b43d9985d8a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 02:07:59.705929895 +0000 UTC m=+0.530270047,LastTimestamp:2026-04-16 02:07:59.705929895 +0000 UTC m=+0.530270047,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 02:08:03.905151 kubelet[2203]: E0416 02:08:03.904936 2203 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 16 02:08:03.905151 kubelet[2203]: I0416 02:08:03.905025 2203 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 02:08:03.909334 kubelet[2203]: E0416 02:08:03.907440 2203 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 16 02:08:03.909334 kubelet[2203]: I0416 02:08:03.907461 2203 
kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 02:08:03.911880 kubelet[2203]: E0416 02:08:03.910495 2203 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 16 02:08:03.922422 kubelet[2203]: I0416 02:08:03.922133 2203 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 02:08:03.930255 kubelet[2203]: E0416 02:08:03.929959 2203 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 16 02:08:03.930505 kubelet[2203]: E0416 02:08:03.930472 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:07.203263 systemd[1]: Reloading requested from client PID 2496 ('systemctl') (unit session-9.scope)... Apr 16 02:08:07.203337 systemd[1]: Reloading... Apr 16 02:08:07.319685 zram_generator::config[2535]: No configuration found. Apr 16 02:08:07.455370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 02:08:07.583697 systemd[1]: Reloading finished in 379 ms. Apr 16 02:08:07.642817 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:08:07.665362 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 02:08:07.666040 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:08:07.666233 systemd[1]: kubelet.service: Consumed 2.237s CPU time, 129.6M memory peak, 0B memory swap peak. 
Apr 16 02:08:07.680873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:08:07.975901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:08:07.982486 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 02:08:08.107440 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 02:08:08.150255 kubelet[2579]: I0416 02:08:08.149893 2579 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 16 02:08:08.150255 kubelet[2579]: I0416 02:08:08.149987 2579 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 02:08:08.150255 kubelet[2579]: I0416 02:08:08.150071 2579 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 02:08:08.150255 kubelet[2579]: I0416 02:08:08.150076 2579 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 02:08:08.150255 kubelet[2579]: I0416 02:08:08.150316 2579 server.go:951] "Client rotation is on, will bootstrap in background" Apr 16 02:08:08.154499 kubelet[2579]: I0416 02:08:08.154179 2579 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 02:08:08.178054 kubelet[2579]: I0416 02:08:08.173963 2579 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 02:08:08.270056 kubelet[2579]: E0416 02:08:08.269207 2579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 02:08:08.271263 kubelet[2579]: I0416 02:08:08.271249 2579 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 16 02:08:08.277211 kubelet[2579]: I0416 02:08:08.277194 2579 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 02:08:08.278660 kubelet[2579]: I0416 02:08:08.278513 2579 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 02:08:08.279363 kubelet[2579]: I0416 02:08:08.278883 2579 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 02:08:08.279835 kubelet[2579]: I0416 02:08:08.279823 2579 topology_manager.go:143] "Creating topology manager with none policy" Apr 16 02:08:08.279883 
kubelet[2579]: I0416 02:08:08.279878 2579 container_manager_linux.go:308] "Creating device plugin manager" Apr 16 02:08:08.279944 kubelet[2579]: I0416 02:08:08.279937 2579 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 02:08:08.280465 kubelet[2579]: I0416 02:08:08.280452 2579 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 16 02:08:08.280997 kubelet[2579]: I0416 02:08:08.280988 2579 kubelet.go:482] "Attempting to sync node with API server" Apr 16 02:08:08.281051 kubelet[2579]: I0416 02:08:08.281046 2579 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 02:08:08.283073 kubelet[2579]: I0416 02:08:08.283062 2579 kubelet.go:394] "Adding apiserver pod source" Apr 16 02:08:08.283316 kubelet[2579]: I0416 02:08:08.283308 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 02:08:08.298347 kubelet[2579]: I0416 02:08:08.298107 2579 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 02:08:08.308836 kubelet[2579]: I0416 02:08:08.306188 2579 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 02:08:08.308836 kubelet[2579]: I0416 02:08:08.306275 2579 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 02:08:08.317660 kubelet[2579]: I0416 02:08:08.316306 2579 server.go:1257] "Started kubelet" Apr 16 02:08:08.324180 kubelet[2579]: I0416 02:08:08.323041 2579 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 02:08:08.326821 kubelet[2579]: I0416 02:08:08.320443 2579 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 02:08:08.344647 kubelet[2579]: I0416 02:08:08.343812 2579 
server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 02:08:08.344647 kubelet[2579]: I0416 02:08:08.344352 2579 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 02:08:08.347239 kubelet[2579]: I0416 02:08:08.345379 2579 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 16 02:08:08.353779 kubelet[2579]: I0416 02:08:08.353454 2579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 02:08:08.361007 kubelet[2579]: I0416 02:08:08.360980 2579 server.go:317] "Adding debug handlers to kubelet server" Apr 16 02:08:08.363850 kubelet[2579]: I0416 02:08:08.363627 2579 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 16 02:08:08.366775 kubelet[2579]: E0416 02:08:08.366301 2579 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:08:08.367164 kubelet[2579]: I0416 02:08:08.367084 2579 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 02:08:08.372632 kubelet[2579]: I0416 02:08:08.370837 2579 reconciler.go:29] "Reconciler: start to sync state" Apr 16 02:08:08.383465 kubelet[2579]: I0416 02:08:08.383366 2579 factory.go:223] Registration of the systemd container factory successfully Apr 16 02:08:08.385931 kubelet[2579]: I0416 02:08:08.383496 2579 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 02:08:08.424393 kubelet[2579]: I0416 02:08:08.424205 2579 factory.go:223] Registration of the containerd container factory successfully Apr 16 02:08:08.447925 kubelet[2579]: I0416 02:08:08.447433 2579 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 16 02:08:08.451961 sudo[2608]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 16 02:08:08.452360 sudo[2608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 16 02:08:08.456794 kubelet[2579]: I0416 02:08:08.455409 2579 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 16 02:08:08.456794 kubelet[2579]: I0416 02:08:08.455905 2579 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 16 02:08:08.456794 kubelet[2579]: I0416 02:08:08.455964 2579 kubelet.go:2501] "Starting kubelet main sync loop" Apr 16 02:08:08.456794 kubelet[2579]: E0416 02:08:08.456061 2579 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 02:08:08.556817 kubelet[2579]: E0416 02:08:08.556455 2579 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 02:08:08.738923 kubelet[2579]: I0416 02:08:08.738697 2579 cpu_manager.go:225] "Starting" policy="none" Apr 16 02:08:08.738923 kubelet[2579]: I0416 02:08:08.738881 2579 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 16 02:08:08.738923 kubelet[2579]: I0416 02:08:08.738906 2579 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 16 02:08:08.739199 kubelet[2579]: I0416 02:08:08.739087 2579 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 16 02:08:08.739199 kubelet[2579]: I0416 02:08:08.739096 2579 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 16 02:08:08.739199 kubelet[2579]: I0416 02:08:08.739139 2579 policy_none.go:50] "Start" Apr 16 02:08:08.739199 kubelet[2579]: I0416 02:08:08.739147 2579 memory_manager.go:187] "Starting 
memorymanager" policy="None" Apr 16 02:08:08.739199 kubelet[2579]: I0416 02:08:08.739155 2579 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 02:08:08.739334 kubelet[2579]: I0416 02:08:08.739288 2579 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 16 02:08:08.739479 kubelet[2579]: I0416 02:08:08.739365 2579 policy_none.go:44] "Start" Apr 16 02:08:08.758492 kubelet[2579]: E0416 02:08:08.758316 2579 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 02:08:08.774841 kubelet[2579]: E0416 02:08:08.774433 2579 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 02:08:08.798677 kubelet[2579]: I0416 02:08:08.798185 2579 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 16 02:08:08.802971 kubelet[2579]: I0416 02:08:08.802922 2579 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 02:08:08.804016 kubelet[2579]: I0416 02:08:08.803957 2579 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 16 02:08:08.823796 kubelet[2579]: E0416 02:08:08.821984 2579 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 02:08:08.974994 kubelet[2579]: I0416 02:08:08.974459 2579 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 02:08:09.004678 kubelet[2579]: I0416 02:08:09.004388 2579 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 16 02:08:09.004678 kubelet[2579]: I0416 02:08:09.004864 2579 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 16 02:08:09.164051 kubelet[2579]: I0416 02:08:09.162251 2579 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 02:08:09.164051 kubelet[2579]: I0416 02:08:09.163082 2579 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 02:08:09.188120 kubelet[2579]: I0416 02:08:09.186985 2579 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 02:08:09.203359 kubelet[2579]: I0416 02:08:09.202092 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:08:09.203359 kubelet[2579]: I0416 02:08:09.202170 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:08:09.203359 kubelet[2579]: I0416 02:08:09.202184 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d42913f7f16752bf7aea774413726d2e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d42913f7f16752bf7aea774413726d2e\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:08:09.203359 kubelet[2579]: I0416 02:08:09.202198 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:08:09.214485 kubelet[2579]: I0416 02:08:09.214342 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:08:09.215870 kubelet[2579]: I0416 02:08:09.214625 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:08:09.215870 kubelet[2579]: I0416 02:08:09.214645 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:08:09.215870 kubelet[2579]: I0416 02:08:09.214666 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:08:09.215870 kubelet[2579]: I0416 02:08:09.214683 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 16 02:08:09.322806 kubelet[2579]: I0416 02:08:09.322384 2579 apiserver.go:52] "Watching apiserver" Apr 16 02:08:09.369634 kubelet[2579]: I0416 02:08:09.369210 2579 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 02:08:09.438140 sudo[2608]: pam_unix(sudo:session): session closed for user root Apr 16 02:08:09.516340 kubelet[2579]: E0416 02:08:09.515767 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:09.522844 kubelet[2579]: E0416 02:08:09.522087 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:09.531274 kubelet[2579]: E0416 02:08:09.531040 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:09.638357 kubelet[2579]: E0416 02:08:09.638211 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:09.638357 kubelet[2579]: E0416 
02:08:09.638211 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:09.647244 kubelet[2579]: E0416 02:08:09.646064 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:09.680986 kubelet[2579]: I0416 02:08:09.680376 2579 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.680293213 podStartE2EDuration="680.293213ms" podCreationTimestamp="2026-04-16 02:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:08:09.625922069 +0000 UTC m=+1.635693122" watchObservedRunningTime="2026-04-16 02:08:09.680293213 +0000 UTC m=+1.690064258" Apr 16 02:08:09.680986 kubelet[2579]: I0416 02:08:09.680509 2579 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.680505022 podStartE2EDuration="680.505022ms" podCreationTimestamp="2026-04-16 02:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:08:09.680036226 +0000 UTC m=+1.689807269" watchObservedRunningTime="2026-04-16 02:08:09.680505022 +0000 UTC m=+1.690276077" Apr 16 02:08:09.832905 kubelet[2579]: I0416 02:08:09.832424 2579 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.832411956 podStartE2EDuration="832.411956ms" podCreationTimestamp="2026-04-16 02:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 
02:08:09.739660059 +0000 UTC m=+1.749431126" watchObservedRunningTime="2026-04-16 02:08:09.832411956 +0000 UTC m=+1.842183012" Apr 16 02:08:10.646167 kubelet[2579]: E0416 02:08:10.645874 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:10.647286 kubelet[2579]: E0416 02:08:10.646412 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:11.673081 kubelet[2579]: E0416 02:08:11.672812 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:11.914092 sudo[1654]: pam_unix(sudo:session): session closed for user root Apr 16 02:08:11.920838 sshd[1651]: pam_unix(sshd:session): session closed for user core Apr 16 02:08:11.927444 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:53664.service: Deactivated successfully. Apr 16 02:08:11.930801 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 02:08:11.931005 systemd[1]: session-9.scope: Consumed 9.544s CPU time, 163.6M memory peak, 0B memory swap peak. Apr 16 02:08:11.934871 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Apr 16 02:08:11.938407 systemd-logind[1442]: Removed session 9. Apr 16 02:08:12.874706 kubelet[2579]: I0416 02:08:12.874347 2579 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 02:08:12.875870 containerd[1463]: time="2026-04-16T02:08:12.875784455Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 16 02:08:12.876822 kubelet[2579]: I0416 02:08:12.876278 2579 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 02:08:13.717466 systemd[1]: Created slice kubepods-besteffort-pod049049b4_9511_4cfd_9540_6b54c0ae29c4.slice - libcontainer container kubepods-besteffort-pod049049b4_9511_4cfd_9540_6b54c0ae29c4.slice. Apr 16 02:08:13.763817 kubelet[2579]: I0416 02:08:13.762432 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/049049b4-9511-4cfd-9540-6b54c0ae29c4-kube-proxy\") pod \"kube-proxy-j8b4f\" (UID: \"049049b4-9511-4cfd-9540-6b54c0ae29c4\") " pod="kube-system/kube-proxy-j8b4f" Apr 16 02:08:13.763817 kubelet[2579]: I0416 02:08:13.762677 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/049049b4-9511-4cfd-9540-6b54c0ae29c4-xtables-lock\") pod \"kube-proxy-j8b4f\" (UID: \"049049b4-9511-4cfd-9540-6b54c0ae29c4\") " pod="kube-system/kube-proxy-j8b4f" Apr 16 02:08:13.763817 kubelet[2579]: I0416 02:08:13.762695 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/049049b4-9511-4cfd-9540-6b54c0ae29c4-lib-modules\") pod \"kube-proxy-j8b4f\" (UID: \"049049b4-9511-4cfd-9540-6b54c0ae29c4\") " pod="kube-system/kube-proxy-j8b4f" Apr 16 02:08:13.763817 kubelet[2579]: I0416 02:08:13.762711 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx9xv\" (UniqueName: \"kubernetes.io/projected/049049b4-9511-4cfd-9540-6b54c0ae29c4-kube-api-access-gx9xv\") pod \"kube-proxy-j8b4f\" (UID: \"049049b4-9511-4cfd-9540-6b54c0ae29c4\") " pod="kube-system/kube-proxy-j8b4f" Apr 16 02:08:13.837224 systemd[1]: Created slice 
kubepods-burstable-pod3d917cf3_4394_48cb_a90e_a40e12c6e709.slice - libcontainer container kubepods-burstable-pod3d917cf3_4394_48cb_a90e_a40e12c6e709.slice. Apr 16 02:08:13.868665 kubelet[2579]: I0416 02:08:13.866239 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-cgroup\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.868665 kubelet[2579]: I0416 02:08:13.866267 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-etc-cni-netd\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.868665 kubelet[2579]: I0416 02:08:13.866279 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-lib-modules\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.868665 kubelet[2579]: I0416 02:08:13.866326 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-run\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.868665 kubelet[2579]: I0416 02:08:13.866350 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-xtables-lock\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " 
pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.868665 kubelet[2579]: I0416 02:08:13.866363 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d917cf3-4394-48cb-a90e-a40e12c6e709-clustermesh-secrets\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.869080 kubelet[2579]: I0416 02:08:13.866376 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-host-proc-sys-net\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.869080 kubelet[2579]: I0416 02:08:13.866484 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d917cf3-4394-48cb-a90e-a40e12c6e709-hubble-tls\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.869080 kubelet[2579]: I0416 02:08:13.866496 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57cm\" (UniqueName: \"kubernetes.io/projected/3d917cf3-4394-48cb-a90e-a40e12c6e709-kube-api-access-v57cm\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.869080 kubelet[2579]: I0416 02:08:13.866509 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-bpf-maps\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.869080 kubelet[2579]: I0416 02:08:13.866899 2579 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-hostproc\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.869080 kubelet[2579]: I0416 02:08:13.866911 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cni-path\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.869174 kubelet[2579]: I0416 02:08:13.866996 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-config-path\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:13.869174 kubelet[2579]: I0416 02:08:13.867009 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-host-proc-sys-kernel\") pod \"cilium-b5rcs\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " pod="kube-system/cilium-b5rcs" Apr 16 02:08:14.049693 kubelet[2579]: E0416 02:08:14.048223 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:14.059174 systemd[1]: Created slice kubepods-besteffort-podc2bbf29f_f4dc_4f9c_b793_e58b0fe596d6.slice - libcontainer container kubepods-besteffort-podc2bbf29f_f4dc_4f9c_b793_e58b0fe596d6.slice. 
Apr 16 02:08:14.070194 containerd[1463]: time="2026-04-16T02:08:14.070081386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8b4f,Uid:049049b4-9511-4cfd-9540-6b54c0ae29c4,Namespace:kube-system,Attempt:0,}" Apr 16 02:08:14.160722 containerd[1463]: time="2026-04-16T02:08:14.159921560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 02:08:14.160722 containerd[1463]: time="2026-04-16T02:08:14.160091828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 02:08:14.160722 containerd[1463]: time="2026-04-16T02:08:14.160100613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:14.160722 containerd[1463]: time="2026-04-16T02:08:14.160176185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:14.235046 kubelet[2579]: I0416 02:08:14.190502 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6-cilium-config-path\") pod \"cilium-operator-78cf5644cb-9nsb2\" (UID: \"c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6\") " pod="kube-system/cilium-operator-78cf5644cb-9nsb2" Apr 16 02:08:14.235046 kubelet[2579]: I0416 02:08:14.190818 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5jkz\" (UniqueName: \"kubernetes.io/projected/c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6-kube-api-access-k5jkz\") pod \"cilium-operator-78cf5644cb-9nsb2\" (UID: \"c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6\") " pod="kube-system/cilium-operator-78cf5644cb-9nsb2" Apr 16 02:08:14.241863 kubelet[2579]: E0416 02:08:14.241607 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:14.265132 containerd[1463]: time="2026-04-16T02:08:14.258068500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b5rcs,Uid:3d917cf3-4394-48cb-a90e-a40e12c6e709,Namespace:kube-system,Attempt:0,}" Apr 16 02:08:14.295036 systemd[1]: Started cri-containerd-8327e0ef49339fbd5ffe2af50fccbd12c10c8b4f85d819e4e7491062b0d50e77.scope - libcontainer container 8327e0ef49339fbd5ffe2af50fccbd12c10c8b4f85d819e4e7491062b0d50e77. 
Apr 16 02:08:14.391436 kubelet[2579]: E0416 02:08:14.389419 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:14.405783 containerd[1463]: time="2026-04-16T02:08:14.405096796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8b4f,Uid:049049b4-9511-4cfd-9540-6b54c0ae29c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8327e0ef49339fbd5ffe2af50fccbd12c10c8b4f85d819e4e7491062b0d50e77\"" Apr 16 02:08:14.411787 containerd[1463]: time="2026-04-16T02:08:14.411290265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-9nsb2,Uid:c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6,Namespace:kube-system,Attempt:0,}" Apr 16 02:08:14.412035 kubelet[2579]: E0416 02:08:14.411912 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:14.437279 containerd[1463]: time="2026-04-16T02:08:14.430967596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 02:08:14.437279 containerd[1463]: time="2026-04-16T02:08:14.431039747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 02:08:14.437279 containerd[1463]: time="2026-04-16T02:08:14.431055264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:14.437279 containerd[1463]: time="2026-04-16T02:08:14.431953372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:14.438775 containerd[1463]: time="2026-04-16T02:08:14.438350070Z" level=info msg="CreateContainer within sandbox \"8327e0ef49339fbd5ffe2af50fccbd12c10c8b4f85d819e4e7491062b0d50e77\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 02:08:14.489171 systemd[1]: Started cri-containerd-61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217.scope - libcontainer container 61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217. Apr 16 02:08:14.519474 containerd[1463]: time="2026-04-16T02:08:14.519370085Z" level=info msg="CreateContainer within sandbox \"8327e0ef49339fbd5ffe2af50fccbd12c10c8b4f85d819e4e7491062b0d50e77\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bffa7a54332aa9fa14304827881adc4bf27b0f113d2632d4d2f940cf234a647e\"" Apr 16 02:08:14.540622 containerd[1463]: time="2026-04-16T02:08:14.540430747Z" level=info msg="StartContainer for \"bffa7a54332aa9fa14304827881adc4bf27b0f113d2632d4d2f940cf234a647e\"" Apr 16 02:08:14.547922 containerd[1463]: time="2026-04-16T02:08:14.545103204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 02:08:14.547922 containerd[1463]: time="2026-04-16T02:08:14.545177392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 02:08:14.547922 containerd[1463]: time="2026-04-16T02:08:14.545195190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:14.547922 containerd[1463]: time="2026-04-16T02:08:14.545280215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:08:14.589966 systemd[1]: Started cri-containerd-dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b.scope - libcontainer container dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b. Apr 16 02:08:14.605967 containerd[1463]: time="2026-04-16T02:08:14.605372342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b5rcs,Uid:3d917cf3-4394-48cb-a90e-a40e12c6e709,Namespace:kube-system,Attempt:0,} returns sandbox id \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\"" Apr 16 02:08:14.625551 kubelet[2579]: E0416 02:08:14.625435 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:14.646946 containerd[1463]: time="2026-04-16T02:08:14.642452036Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 16 02:08:14.696084 systemd[1]: Started cri-containerd-bffa7a54332aa9fa14304827881adc4bf27b0f113d2632d4d2f940cf234a647e.scope - libcontainer container bffa7a54332aa9fa14304827881adc4bf27b0f113d2632d4d2f940cf234a647e. 
Apr 16 02:08:14.705060 containerd[1463]: time="2026-04-16T02:08:14.704960375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-9nsb2,Uid:c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\"" Apr 16 02:08:14.709129 kubelet[2579]: E0416 02:08:14.709083 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:14.845475 containerd[1463]: time="2026-04-16T02:08:14.845073841Z" level=info msg="StartContainer for \"bffa7a54332aa9fa14304827881adc4bf27b0f113d2632d4d2f940cf234a647e\" returns successfully" Apr 16 02:08:14.866098 kubelet[2579]: E0416 02:08:14.866040 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:15.915045 kubelet[2579]: E0416 02:08:15.914857 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:16.847039 kubelet[2579]: E0416 02:08:16.846147 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:16.885686 kubelet[2579]: I0416 02:08:16.884970 2579 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-j8b4f" podStartSLOduration=3.884957643 podStartE2EDuration="3.884957643s" podCreationTimestamp="2026-04-16 02:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:08:14.91050042 +0000 UTC m=+6.920271468" watchObservedRunningTime="2026-04-16 
02:08:16.884957643 +0000 UTC m=+8.894728698" Apr 16 02:08:17.584809 kubelet[2579]: E0416 02:08:17.584285 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:21.212174 kubelet[2579]: E0416 02:08:21.211702 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:26.903974 kubelet[2579]: E0416 02:08:26.902183 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:27.730349 kubelet[2579]: E0416 02:08:27.730262 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:08:33.796330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1557073726.mount: Deactivated successfully. 
Apr 16 02:08:47.887049 kubelet[2579]: E0416 02:08:47.882481 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.416s" Apr 16 02:09:01.858045 containerd[1463]: time="2026-04-16T02:09:01.855332682Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 16 02:09:01.860199 containerd[1463]: time="2026-04-16T02:09:01.855894266Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:09:02.208499 containerd[1463]: time="2026-04-16T02:09:02.207266060Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:09:02.402078 containerd[1463]: time="2026-04-16T02:09:02.401914091Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 47.756962093s" Apr 16 02:09:02.402078 containerd[1463]: time="2026-04-16T02:09:02.402045638Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 16 02:09:02.436459 containerd[1463]: time="2026-04-16T02:09:02.436373769Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 16 02:09:02.483150 
containerd[1463]: time="2026-04-16T02:09:02.482262729Z" level=info msg="CreateContainer within sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 16 02:09:02.805411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3603243728.mount: Deactivated successfully. Apr 16 02:09:02.842937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471646702.mount: Deactivated successfully. Apr 16 02:09:03.549945 containerd[1463]: time="2026-04-16T02:09:03.541469752Z" level=info msg="CreateContainer within sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545\"" Apr 16 02:09:03.791976 containerd[1463]: time="2026-04-16T02:09:03.791336455Z" level=info msg="StartContainer for \"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545\"" Apr 16 02:09:04.516420 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:40014.service - OpenSSH per-connection server daemon (10.0.0.1:40014). Apr 16 02:09:05.741460 systemd[1]: Started cri-containerd-b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545.scope - libcontainer container b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545. Apr 16 02:09:05.832151 sshd[3003]: Accepted publickey for core from 10.0.0.1 port 40014 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:09:05.941047 sshd[3003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:06.063219 systemd-logind[1442]: New session 10 of user core. Apr 16 02:09:06.071334 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 16 02:09:06.085222 kubelet[2579]: E0416 02:09:06.085026 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.601s" Apr 16 02:09:07.223325 systemd[1]: cri-containerd-b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545.scope: Deactivated successfully. Apr 16 02:09:07.375829 containerd[1463]: time="2026-04-16T02:09:07.310331066Z" level=info msg="StartContainer for \"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545\" returns successfully" Apr 16 02:09:07.798133 kubelet[2579]: E0416 02:09:07.793407 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:08.497464 sshd[3003]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:08.532262 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:40014.service: Deactivated successfully. Apr 16 02:09:08.551481 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 02:09:08.554305 systemd[1]: session-10.scope: Consumed 1.498s CPU time. Apr 16 02:09:08.583327 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Apr 16 02:09:08.599966 systemd-logind[1442]: Removed session 10. Apr 16 02:09:09.324000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545-rootfs.mount: Deactivated successfully. 
Apr 16 02:09:09.385804 kubelet[2579]: E0416 02:09:09.381346 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:09.625037 containerd[1463]: time="2026-04-16T02:09:09.616991870Z" level=info msg="shim disconnected" id=b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545 namespace=k8s.io Apr 16 02:09:09.635321 containerd[1463]: time="2026-04-16T02:09:09.633342297Z" level=warning msg="cleaning up after shim disconnected" id=b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545 namespace=k8s.io Apr 16 02:09:09.647223 containerd[1463]: time="2026-04-16T02:09:09.637457887Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 02:09:10.623945 containerd[1463]: time="2026-04-16T02:09:10.590423564Z" level=warning msg="cleanup warnings time=\"2026-04-16T02:09:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 16 02:09:11.379436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729135467.mount: Deactivated successfully. Apr 16 02:09:13.225099 kubelet[2579]: E0416 02:09:13.225008 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:13.637457 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:41626.service - OpenSSH per-connection server daemon (10.0.0.1:41626). 
Apr 16 02:09:13.698005 containerd[1463]: time="2026-04-16T02:09:13.697437422Z" level=info msg="CreateContainer within sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 16 02:09:13.945341 sshd[3081]: Accepted publickey for core from 10.0.0.1 port 41626 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:09:13.952002 sshd[3081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:14.002058 systemd-logind[1442]: New session 11 of user core. Apr 16 02:09:14.032789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3391843944.mount: Deactivated successfully. Apr 16 02:09:14.044321 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 02:09:14.177198 containerd[1463]: time="2026-04-16T02:09:14.176854492Z" level=info msg="CreateContainer within sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a\"" Apr 16 02:09:14.228831 containerd[1463]: time="2026-04-16T02:09:14.225943865Z" level=info msg="StartContainer for \"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a\"" Apr 16 02:09:14.615099 systemd[1]: Started cri-containerd-437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a.scope - libcontainer container 437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a. Apr 16 02:09:15.296812 sshd[3081]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:15.309965 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:41626.service: Deactivated successfully. Apr 16 02:09:15.344245 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 02:09:15.403696 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. 
Apr 16 02:09:15.412972 systemd-logind[1442]: Removed session 11. Apr 16 02:09:15.460671 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 02:09:15.460967 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 02:09:15.461050 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 16 02:09:15.498904 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 02:09:15.502092 systemd[1]: cri-containerd-437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a.scope: Deactivated successfully. Apr 16 02:09:15.537503 containerd[1463]: time="2026-04-16T02:09:15.503395245Z" level=info msg="StartContainer for \"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a\" returns successfully" Apr 16 02:09:15.709885 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 02:09:16.057081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a-rootfs.mount: Deactivated successfully. 
Apr 16 02:09:16.147935 containerd[1463]: time="2026-04-16T02:09:16.139119116Z" level=info msg="shim disconnected" id=437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a namespace=k8s.io Apr 16 02:09:16.160889 containerd[1463]: time="2026-04-16T02:09:16.147144049Z" level=warning msg="cleaning up after shim disconnected" id=437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a namespace=k8s.io Apr 16 02:09:16.166104 containerd[1463]: time="2026-04-16T02:09:16.166000537Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 02:09:16.688390 kubelet[2579]: E0416 02:09:16.687924 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:17.151796 containerd[1463]: time="2026-04-16T02:09:17.151223373Z" level=info msg="CreateContainer within sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 16 02:09:17.450297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1490638716.mount: Deactivated successfully. Apr 16 02:09:17.490556 containerd[1463]: time="2026-04-16T02:09:17.490304448Z" level=info msg="CreateContainer within sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de\"" Apr 16 02:09:17.500783 containerd[1463]: time="2026-04-16T02:09:17.499835861Z" level=info msg="StartContainer for \"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de\"" Apr 16 02:09:17.920217 systemd[1]: Started cri-containerd-61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de.scope - libcontainer container 61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de. 
Apr 16 02:09:18.246334 containerd[1463]: time="2026-04-16T02:09:18.245022461Z" level=info msg="StartContainer for \"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de\" returns successfully" Apr 16 02:09:18.318901 systemd[1]: cri-containerd-61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de.scope: Deactivated successfully. Apr 16 02:09:19.088298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de-rootfs.mount: Deactivated successfully. Apr 16 02:09:19.224486 containerd[1463]: time="2026-04-16T02:09:19.218270967Z" level=info msg="shim disconnected" id=61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de namespace=k8s.io Apr 16 02:09:19.254690 containerd[1463]: time="2026-04-16T02:09:19.249878425Z" level=warning msg="cleaning up after shim disconnected" id=61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de namespace=k8s.io Apr 16 02:09:19.288831 containerd[1463]: time="2026-04-16T02:09:19.254813725Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 02:09:20.240849 kubelet[2579]: E0416 02:09:20.239417 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:20.475940 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:40176.service - OpenSSH per-connection server daemon (10.0.0.1:40176). Apr 16 02:09:20.806971 sshd[3230]: Accepted publickey for core from 10.0.0.1 port 40176 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:09:20.812371 sshd[3230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:20.857188 systemd-logind[1442]: New session 12 of user core. Apr 16 02:09:20.861831 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 16 02:09:21.521272 containerd[1463]: time="2026-04-16T02:09:21.491388850Z" level=info msg="CreateContainer within sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 16 02:09:23.489432 sshd[3230]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:23.523218 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:40176.service: Deactivated successfully. Apr 16 02:09:23.608270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1947635038.mount: Deactivated successfully. Apr 16 02:09:23.624154 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 02:09:23.624832 systemd[1]: session-12.scope: Consumed 1.681s CPU time. Apr 16 02:09:23.627408 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Apr 16 02:09:23.639838 systemd-logind[1442]: Removed session 12. Apr 16 02:09:23.936995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933125390.mount: Deactivated successfully. Apr 16 02:09:24.428252 containerd[1463]: time="2026-04-16T02:09:24.425845466Z" level=info msg="CreateContainer within sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d\"" Apr 16 02:09:24.490080 containerd[1463]: time="2026-04-16T02:09:24.489943982Z" level=info msg="StartContainer for \"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d\"" Apr 16 02:09:25.312056 systemd[1]: Started cri-containerd-e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d.scope - libcontainer container e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d. Apr 16 02:09:26.165486 systemd[1]: cri-containerd-e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d.scope: Deactivated successfully. 
Apr 16 02:09:26.224179 containerd[1463]: time="2026-04-16T02:09:26.213306912Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d917cf3_4394_48cb_a90e_a40e12c6e709.slice/cri-containerd-e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d.scope/memory.events\": no such file or directory" Apr 16 02:09:26.306888 containerd[1463]: time="2026-04-16T02:09:26.306383338Z" level=info msg="StartContainer for \"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d\" returns successfully" Apr 16 02:09:27.241208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d-rootfs.mount: Deactivated successfully. Apr 16 02:09:27.500415 containerd[1463]: time="2026-04-16T02:09:27.496391650Z" level=info msg="shim disconnected" id=e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d namespace=k8s.io Apr 16 02:09:27.513378 containerd[1463]: time="2026-04-16T02:09:27.501379152Z" level=warning msg="cleaning up after shim disconnected" id=e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d namespace=k8s.io Apr 16 02:09:27.525132 containerd[1463]: time="2026-04-16T02:09:27.507461993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 02:09:27.955238 containerd[1463]: time="2026-04-16T02:09:27.954938915Z" level=warning msg="cleanup warnings time=\"2026-04-16T02:09:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 16 02:09:28.648931 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:40190.service - OpenSSH per-connection server daemon (10.0.0.1:40190). 
Apr 16 02:09:28.906048 sshd[3302]: Accepted publickey for core from 10.0.0.1 port 40190 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:09:28.914958 sshd[3302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:29.072431 systemd-logind[1442]: New session 13 of user core. Apr 16 02:09:29.108270 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 02:09:29.529322 kubelet[2579]: E0416 02:09:29.523507 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:31.860884 containerd[1463]: time="2026-04-16T02:09:31.836251357Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 16 02:09:32.053868 containerd[1463]: time="2026-04-16T02:09:32.022267012Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:09:32.109510 sshd[3302]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:32.166316 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:40190.service: Deactivated successfully. Apr 16 02:09:32.245266 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 02:09:32.250344 systemd[1]: session-13.scope: Consumed 1.851s CPU time. Apr 16 02:09:32.325148 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Apr 16 02:09:32.411164 systemd-logind[1442]: Removed session 13. 
Apr 16 02:09:32.465869 containerd[1463]: time="2026-04-16T02:09:32.462428761Z" level=info msg="CreateContainer within sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 16 02:09:32.852011 containerd[1463]: time="2026-04-16T02:09:32.851469617Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:09:33.033486 kubelet[2579]: E0416 02:09:33.031018 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.405s" Apr 16 02:09:33.927948 containerd[1463]: time="2026-04-16T02:09:33.924500021Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 31.485207527s" Apr 16 02:09:33.932862 containerd[1463]: time="2026-04-16T02:09:33.931501106Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 16 02:09:34.060161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793759714.mount: Deactivated successfully. Apr 16 02:09:34.197828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383499457.mount: Deactivated successfully. 
Apr 16 02:09:34.381457 containerd[1463]: time="2026-04-16T02:09:34.381125603Z" level=info msg="CreateContainer within sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\"" Apr 16 02:09:34.391431 containerd[1463]: time="2026-04-16T02:09:34.390682494Z" level=info msg="CreateContainer within sandbox \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 16 02:09:34.718314 containerd[1463]: time="2026-04-16T02:09:34.717859781Z" level=info msg="StartContainer for \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\"" Apr 16 02:09:35.232875 kubelet[2579]: E0416 02:09:35.229514 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:35.346510 systemd[1]: run-containerd-runc-k8s.io-631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440-runc.tSO2pR.mount: Deactivated successfully. Apr 16 02:09:35.520991 systemd[1]: Started cri-containerd-631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440.scope - libcontainer container 631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440. Apr 16 02:09:36.404164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179890284.mount: Deactivated successfully. Apr 16 02:09:36.710079 containerd[1463]: time="2026-04-16T02:09:36.661208537Z" level=info msg="StartContainer for \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\" returns successfully" Apr 16 02:09:37.307821 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:37852.service - OpenSSH per-connection server daemon (10.0.0.1:37852). 
Apr 16 02:09:37.755292 sshd[3359]: Accepted publickey for core from 10.0.0.1 port 37852 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:09:37.761428 sshd[3359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:37.939492 systemd-logind[1442]: New session 14 of user core. Apr 16 02:09:37.991293 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 02:09:38.455763 kubelet[2579]: E0416 02:09:38.450279 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.903s" Apr 16 02:09:40.289891 containerd[1463]: time="2026-04-16T02:09:40.286271446Z" level=info msg="CreateContainer within sandbox \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\"" Apr 16 02:09:40.294512 kubelet[2579]: E0416 02:09:40.291498 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.827s" Apr 16 02:09:40.692381 kubelet[2579]: E0416 02:09:40.690438 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:40.820921 kubelet[2579]: E0416 02:09:40.814404 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:40.820422 sshd[3359]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:40.849831 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:37852.service: Deactivated successfully. Apr 16 02:09:40.893968 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 02:09:40.894225 systemd[1]: session-14.scope: Consumed 1.544s CPU time. 
Apr 16 02:09:40.899114 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Apr 16 02:09:40.909153 systemd-logind[1442]: Removed session 14. Apr 16 02:09:41.131167 containerd[1463]: time="2026-04-16T02:09:41.117312824Z" level=info msg="StartContainer for \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\"" Apr 16 02:09:42.323589 kubelet[2579]: E0416 02:09:42.323154 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.832s" Apr 16 02:09:42.724409 systemd[1]: Started cri-containerd-e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5.scope - libcontainer container e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5. Apr 16 02:09:44.785822 kubelet[2579]: E0416 02:09:44.666475 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.182s" Apr 16 02:09:45.401291 containerd[1463]: time="2026-04-16T02:09:45.360272388Z" level=info msg="StartContainer for \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\" returns successfully" Apr 16 02:09:45.933175 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:47920.service - OpenSSH per-connection server daemon (10.0.0.1:47920). Apr 16 02:09:46.230852 kubelet[2579]: E0416 02:09:46.227803 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.447s" Apr 16 02:09:46.319947 sshd[3456]: Accepted publickey for core from 10.0.0.1 port 47920 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:09:46.351265 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:46.549227 systemd-logind[1442]: New session 15 of user core. Apr 16 02:09:46.601908 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 16 02:09:47.863200 kubelet[2579]: E0416 02:09:47.858983 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.373s" Apr 16 02:09:48.751989 kubelet[2579]: I0416 02:09:48.747343 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz6gn\" (UniqueName: \"kubernetes.io/projected/5933f2cb-ae5a-47e4-91d4-0d8be9480079-kube-api-access-wz6gn\") pod \"coredns-7d764666f9-27d9x\" (UID: \"5933f2cb-ae5a-47e4-91d4-0d8be9480079\") " pod="kube-system/coredns-7d764666f9-27d9x" Apr 16 02:09:48.763374 sshd[3456]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:48.799727 kubelet[2579]: E0416 02:09:48.799462 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:48.813174 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:47920.service: Deactivated successfully. Apr 16 02:09:48.854326 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 02:09:48.857375 systemd[1]: session-15.scope: Consumed 1.202s CPU time. Apr 16 02:09:48.897236 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Apr 16 02:09:48.931365 kubelet[2579]: I0416 02:09:48.760967 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5933f2cb-ae5a-47e4-91d4-0d8be9480079-config-volume\") pod \"coredns-7d764666f9-27d9x\" (UID: \"5933f2cb-ae5a-47e4-91d4-0d8be9480079\") " pod="kube-system/coredns-7d764666f9-27d9x" Apr 16 02:09:48.948114 systemd-logind[1442]: Removed session 15. 
Apr 16 02:09:49.212303 kubelet[2579]: I0416 02:09:49.211094 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd22b9d8-1786-4923-96f3-3db07d47e21f-config-volume\") pod \"coredns-7d764666f9-ss8dh\" (UID: \"fd22b9d8-1786-4923-96f3-3db07d47e21f\") " pod="kube-system/coredns-7d764666f9-ss8dh" Apr 16 02:09:49.333459 kubelet[2579]: I0416 02:09:49.254383 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbw8\" (UniqueName: \"kubernetes.io/projected/fd22b9d8-1786-4923-96f3-3db07d47e21f-kube-api-access-5dbw8\") pod \"coredns-7d764666f9-ss8dh\" (UID: \"fd22b9d8-1786-4923-96f3-3db07d47e21f\") " pod="kube-system/coredns-7d764666f9-ss8dh" Apr 16 02:09:49.723268 kubelet[2579]: E0416 02:09:49.723069 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:49.807253 systemd[1]: Created slice kubepods-burstable-pod5933f2cb_ae5a_47e4_91d4_0d8be9480079.slice - libcontainer container kubepods-burstable-pod5933f2cb_ae5a_47e4_91d4_0d8be9480079.slice. Apr 16 02:09:50.023919 systemd[1]: Created slice kubepods-burstable-podfd22b9d8_1786_4923_96f3_3db07d47e21f.slice - libcontainer container kubepods-burstable-podfd22b9d8_1786_4923_96f3_3db07d47e21f.slice. 
Apr 16 02:09:50.301761 kubelet[2579]: E0416 02:09:50.298965 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:50.526378 kubelet[2579]: E0416 02:09:50.523424 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:50.624729 containerd[1463]: time="2026-04-16T02:09:50.611357843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ss8dh,Uid:fd22b9d8-1786-4923-96f3-3db07d47e21f,Namespace:kube-system,Attempt:0,}" Apr 16 02:09:50.712160 containerd[1463]: time="2026-04-16T02:09:50.712066613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-27d9x,Uid:5933f2cb-ae5a-47e4-91d4-0d8be9480079,Namespace:kube-system,Attempt:0,}" Apr 16 02:09:50.824879 kubelet[2579]: I0416 02:09:50.823133 2579 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-9nsb2" podStartSLOduration=18.509090097 podStartE2EDuration="1m37.823061678s" podCreationTimestamp="2026-04-16 02:08:13 +0000 UTC" firstStartedPulling="2026-04-16 02:08:14.712490208 +0000 UTC m=+6.722261253" lastFinishedPulling="2026-04-16 02:09:34.026461789 +0000 UTC m=+86.036232834" observedRunningTime="2026-04-16 02:09:50.791037626 +0000 UTC m=+102.800808676" watchObservedRunningTime="2026-04-16 02:09:50.823061678 +0000 UTC m=+102.832832733" Apr 16 02:09:51.515211 kubelet[2579]: E0416 02:09:51.514456 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.016s" Apr 16 02:09:52.885161 kubelet[2579]: E0416 02:09:52.874368 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:09:53.844484 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:60068.service - OpenSSH per-connection server daemon (10.0.0.1:60068). Apr 16 02:09:54.166406 sshd[3526]: Accepted publickey for core from 10.0.0.1 port 60068 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:09:54.218355 sshd[3526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:09:54.357501 kubelet[2579]: E0416 02:09:54.289895 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:54.341106 systemd-logind[1442]: New session 16 of user core. Apr 16 02:09:54.407300 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 16 02:09:56.274741 kubelet[2579]: E0416 02:09:56.263258 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:09:56.321771 kubelet[2579]: E0416 02:09:56.318061 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.777s" Apr 16 02:09:58.703309 sshd[3526]: pam_unix(sshd:session): session closed for user core Apr 16 02:09:58.731092 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:60068.service: Deactivated successfully. Apr 16 02:09:58.763397 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 02:09:58.839332 systemd[1]: session-16.scope: Consumed 2.559s CPU time. Apr 16 02:09:58.865148 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Apr 16 02:09:58.900000 systemd-logind[1442]: Removed session 16.
Apr 16 02:09:59.456234 kubelet[2579]: E0416 02:09:59.451281 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.133s" Apr 16 02:09:59.712316 kubelet[2579]: E0416 02:09:59.698189 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:10:00.523905 kubelet[2579]: E0416 02:10:00.520476 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.038s" Apr 16 02:10:02.223781 kubelet[2579]: E0416 02:10:02.219513 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:10:03.922380 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:44422.service - OpenSSH per-connection server daemon (10.0.0.1:44422). Apr 16 02:10:04.406851 kubelet[2579]: E0416 02:10:04.406192 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.944s" Apr 16 02:10:04.629954 sshd[3558]: Accepted publickey for core from 10.0.0.1 port 44422 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:10:04.648226 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:10:04.740938 systemd-logind[1442]: New session 17 of user core. Apr 16 02:10:04.760915 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 02:10:07.757946 kubelet[2579]: E0416 02:10:07.753446 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.295s" Apr 16 02:10:10.611241 sshd[3558]: pam_unix(sshd:session): session closed for user core Apr 16 02:10:10.640954 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:44422.service: Deactivated successfully. 
Apr 16 02:10:10.744174 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 02:10:10.744992 systemd[1]: session-17.scope: Consumed 3.854s CPU time. Apr 16 02:10:10.768429 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Apr 16 02:10:10.798155 systemd-logind[1442]: Removed session 17. Apr 16 02:10:10.963278 kubelet[2579]: E0416 02:10:10.962244 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.502s" Apr 16 02:10:13.853070 kubelet[2579]: E0416 02:10:13.848398 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.345s" Apr 16 02:10:15.715692 kubelet[2579]: E0416 02:10:15.712208 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.177s" Apr 16 02:10:15.732274 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:56322.service - OpenSSH per-connection server daemon (10.0.0.1:56322). Apr 16 02:10:16.011693 sshd[3578]: Accepted publickey for core from 10.0.0.1 port 56322 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:10:16.018019 sshd[3578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:10:16.172691 systemd-logind[1442]: New session 18 of user core. Apr 16 02:10:16.247458 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 16 02:10:19.643412 kubelet[2579]: E0416 02:10:19.631036 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.165s" Apr 16 02:10:20.910042 kubelet[2579]: E0416 02:10:20.852426 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.213s" Apr 16 02:10:22.905685 kubelet[2579]: E0416 02:10:22.896501 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.858s" Apr 16 02:10:24.225726 sshd[3578]: pam_unix(sshd:session): session closed for user core Apr 16 02:10:24.341002 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:56322.service: Deactivated successfully. Apr 16 02:10:24.386211 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 02:10:24.387224 systemd[1]: session-18.scope: Consumed 5.062s CPU time. Apr 16 02:10:24.394498 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Apr 16 02:10:24.406349 systemd-logind[1442]: Removed session 18. Apr 16 02:10:29.321927 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:60990.service - OpenSSH per-connection server daemon (10.0.0.1:60990). Apr 16 02:10:29.707308 sshd[3622]: Accepted publickey for core from 10.0.0.1 port 60990 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:10:29.743399 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:10:29.930069 systemd-logind[1442]: New session 19 of user core. Apr 16 02:10:30.015683 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 16 02:10:31.701439 sshd[3622]: pam_unix(sshd:session): session closed for user core Apr 16 02:10:31.821859 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:60990.service: Deactivated successfully. Apr 16 02:10:31.832272 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 02:10:31.834491 systemd[1]: session-19.scope: Consumed 1.295s CPU time. 
Apr 16 02:10:31.875397 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Apr 16 02:10:31.925398 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:53730.service - OpenSSH per-connection server daemon (10.0.0.1:53730). Apr 16 02:10:31.954238 systemd-logind[1442]: Removed session 19. Apr 16 02:10:32.206723 sshd[3639]: Accepted publickey for core from 10.0.0.1 port 53730 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:10:32.224840 sshd[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:10:32.266229 systemd-logind[1442]: New session 20 of user core. Apr 16 02:10:32.287843 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 16 02:10:35.373164 sshd[3639]: pam_unix(sshd:session): session closed for user core Apr 16 02:10:35.441196 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:53730.service: Deactivated successfully. Apr 16 02:10:35.517907 systemd[1]: session-20.scope: Deactivated successfully. Apr 16 02:10:35.518342 systemd[1]: session-20.scope: Consumed 1.813s CPU time. Apr 16 02:10:35.544076 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. Apr 16 02:10:35.588091 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:53744.service - OpenSSH per-connection server daemon (10.0.0.1:53744). Apr 16 02:10:35.639241 systemd-logind[1442]: Removed session 20. Apr 16 02:10:36.477420 sshd[3651]: Accepted publickey for core from 10.0.0.1 port 53744 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:10:36.567939 sshd[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:10:36.729066 systemd-logind[1442]: New session 21 of user core. Apr 16 02:10:36.770943 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 16 02:10:38.936211 systemd-networkd[1292]: cilium_host: Link UP Apr 16 02:10:38.940942 systemd-networkd[1292]: cilium_net: Link UP Apr 16 02:10:38.941212 systemd-networkd[1292]: cilium_net: Gained carrier Apr 16 02:10:38.950045 systemd-networkd[1292]: cilium_host: Gained carrier Apr 16 02:10:39.535390 systemd-networkd[1292]: cilium_net: Gained IPv6LL Apr 16 02:10:39.666421 sshd[3651]: pam_unix(sshd:session): session closed for user core Apr 16 02:10:39.817329 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:53744.service: Deactivated successfully. Apr 16 02:10:39.843492 systemd[1]: session-21.scope: Deactivated successfully. Apr 16 02:10:39.844092 systemd[1]: session-21.scope: Consumed 1.965s CPU time. Apr 16 02:10:39.848339 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit. Apr 16 02:10:39.865236 systemd-logind[1442]: Removed session 21. Apr 16 02:10:40.002219 systemd-networkd[1292]: cilium_host: Gained IPv6LL Apr 16 02:10:41.600204 systemd-networkd[1292]: cilium_vxlan: Link UP Apr 16 02:10:41.604108 systemd-networkd[1292]: cilium_vxlan: Gained carrier Apr 16 02:10:43.122353 systemd-networkd[1292]: cilium_vxlan: Gained IPv6LL Apr 16 02:10:44.854245 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:37720.service - OpenSSH per-connection server daemon (10.0.0.1:37720). Apr 16 02:10:45.185346 sshd[3758]: Accepted publickey for core from 10.0.0.1 port 37720 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:10:45.196176 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:10:45.338301 systemd-logind[1442]: New session 22 of user core. Apr 16 02:10:45.362315 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 16 02:10:47.562755 sshd[3758]: pam_unix(sshd:session): session closed for user core Apr 16 02:10:47.602322 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:37720.service: Deactivated successfully. 
Apr 16 02:10:47.629904 systemd[1]: session-22.scope: Deactivated successfully. Apr 16 02:10:47.631479 systemd[1]: session-22.scope: Consumed 1.198s CPU time. Apr 16 02:10:47.691322 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit. Apr 16 02:10:47.729930 systemd-logind[1442]: Removed session 22. Apr 16 02:10:48.103725 kubelet[2579]: E0416 02:10:48.102248 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.546s" Apr 16 02:10:48.144774 kubelet[2579]: E0416 02:10:48.142940 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:10:50.534770 kubelet[2579]: E0416 02:10:50.531165 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:10:52.757444 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:60218.service - OpenSSH per-connection server daemon (10.0.0.1:60218). Apr 16 02:10:53.141130 sshd[3776]: Accepted publickey for core from 10.0.0.1 port 60218 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:10:53.145059 sshd[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:10:53.241823 systemd-logind[1442]: New session 23 of user core. Apr 16 02:10:53.266665 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 16 02:10:54.628911 sshd[3776]: pam_unix(sshd:session): session closed for user core Apr 16 02:10:54.672707 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:60218.service: Deactivated successfully. Apr 16 02:10:54.799193 systemd[1]: session-23.scope: Deactivated successfully. Apr 16 02:10:54.811500 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit. Apr 16 02:10:54.835952 systemd-logind[1442]: Removed session 23. 
Apr 16 02:10:55.626867 containerd[1463]: time="2026-04-16T02:10:55.607669623Z" level=error msg="Failed to destroy network for sandbox \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\"" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Apr 16 02:10:55.733186 containerd[1463]: time="2026-04-16T02:10:55.724259861Z" level=error msg="encountered an error cleaning up failed sandbox \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Apr 16 02:10:55.735400 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545-shm.mount: Deactivated successfully. Apr 16 02:10:55.737944 containerd[1463]: time="2026-04-16T02:10:55.736186987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-27d9x,Uid:5933f2cb-ae5a-47e4-91d4-0d8be9480079,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" 
Apr 16 02:10:55.777959 kubelet[2579]: E0416 02:10:55.777094 2579 log.go:32] "RunPodSandbox from runtime service failed" err=< Apr 16 02:10:55.777959 kubelet[2579]: rpc error: code = Unknown desc = failed to setup network for sandbox "5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 16 02:10:55.777959 kubelet[2579]: Is the agent running? Apr 16 02:10:55.777959 kubelet[2579]: > Apr 16 02:10:55.789746 kubelet[2579]: E0416 02:10:55.784928 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err=< Apr 16 02:10:55.789746 kubelet[2579]: rpc error: code = Unknown desc = failed to setup network for sandbox "5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 16 02:10:55.789746 kubelet[2579]: Is the agent running? 
Apr 16 02:10:55.789746 kubelet[2579]: > pod="kube-system/coredns-7d764666f9-27d9x" Apr 16 02:10:55.789746 kubelet[2579]: E0416 02:10:55.786841 2579 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err=< Apr 16 02:10:55.789746 kubelet[2579]: rpc error: code = Unknown desc = failed to setup network for sandbox "5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 16 02:10:55.789746 kubelet[2579]: Is the agent running? Apr 16 02:10:55.789746 kubelet[2579]: > pod="kube-system/coredns-7d764666f9-27d9x" Apr 16 02:10:55.805328 kubelet[2579]: E0416 02:10:55.804853 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-27d9x_kube-system(5933f2cb-ae5a-47e4-91d4-0d8be9480079)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-27d9x_kube-system(5933f2cb-ae5a-47e4-91d4-0d8be9480079)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:10:56.896178 containerd[1463]: time="2026-04-16T02:10:56.878726435Z" level=error msg="Failed to destroy network for sandbox \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\"" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Apr 16 02:10:57.032923 containerd[1463]: time="2026-04-16T02:10:56.944900127Z" level=error msg="encountered an error cleaning up failed sandbox \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Apr 16 02:10:57.035799 containerd[1463]: time="2026-04-16T02:10:57.035285060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ss8dh,Uid:fd22b9d8-1786-4923-96f3-3db07d47e21f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Apr 16 02:10:57.052398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673-shm.mount: Deactivated successfully.
Apr 16 02:10:57.055056 kubelet[2579]: E0416 02:10:57.054455 2579 log.go:32] "RunPodSandbox from runtime service failed" err=< Apr 16 02:10:57.055056 kubelet[2579]: rpc error: code = Unknown desc = failed to setup network for sandbox "320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 16 02:10:57.055056 kubelet[2579]: Is the agent running? Apr 16 02:10:57.055056 kubelet[2579]: > Apr 16 02:10:57.074680 kubelet[2579]: E0416 02:10:57.065283 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err=< Apr 16 02:10:57.074680 kubelet[2579]: rpc error: code = Unknown desc = failed to setup network for sandbox "320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 16 02:10:57.074680 kubelet[2579]: Is the agent running? 
Apr 16 02:10:57.074680 kubelet[2579]: > pod="kube-system/coredns-7d764666f9-ss8dh" Apr 16 02:10:57.084352 kubelet[2579]: E0416 02:10:57.077423 2579 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err=< Apr 16 02:10:57.084352 kubelet[2579]: rpc error: code = Unknown desc = failed to setup network for sandbox "320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 16 02:10:57.084352 kubelet[2579]: Is the agent running? Apr 16 02:10:57.084352 kubelet[2579]: > pod="kube-system/coredns-7d764666f9-ss8dh" Apr 16 02:10:57.105741 kubelet[2579]: E0416 02:10:57.103681 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-ss8dh_kube-system(fd22b9d8-1786-4923-96f3-3db07d47e21f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-ss8dh_kube-system(fd22b9d8-1786-4923-96f3-3db07d47e21f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:10:57.494374 kubelet[2579]: E0416 02:10:57.494058 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:10:57.524862 kubelet[2579]: I0416 02:10:57.524234 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545" Apr 16 02:10:57.711864 containerd[1463]: time="2026-04-16T02:10:57.710298816Z" level=info msg="StopPodSandbox for \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\"" Apr 16 02:10:57.838480 containerd[1463]: time="2026-04-16T02:10:57.765455143Z" level=info msg="Ensure that sandbox 5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545 in task-service has been cleanup successfully" Apr 16 02:10:59.082817 kubelet[2579]: I0416 02:10:59.081888 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673" Apr 16 02:10:59.236688 containerd[1463]: time="2026-04-16T02:10:59.235407095Z" level=info msg="StopPodSandbox for \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\"" Apr 16 02:10:59.236688 containerd[1463]: time="2026-04-16T02:10:59.236636087Z" level=info msg="Ensure that sandbox 320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673 in task-service has been cleanup successfully" Apr 16 02:10:59.757749 systemd[1]: Started sshd@23-10.0.0.6:22-10.0.0.1:32986.service - OpenSSH per-connection server daemon (10.0.0.1:32986). Apr 16 02:11:00.339721 sshd[3817]: Accepted publickey for core from 10.0.0.1 port 32986 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:00.340235 sshd[3817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:00.538398 systemd-logind[1442]: New session 24 of user core. Apr 16 02:11:00.545420 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 16 02:11:02.196380 sshd[3817]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:02.328078 systemd[1]: sshd@23-10.0.0.6:22-10.0.0.1:32986.service: Deactivated successfully. Apr 16 02:11:02.357888 systemd[1]: session-24.scope: Deactivated successfully. Apr 16 02:11:02.360215 systemd[1]: session-24.scope: Consumed 1.186s CPU time. Apr 16 02:11:02.374310 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Apr 16 02:11:02.383393 systemd-logind[1442]: Removed session 24. Apr 16 02:11:07.421160 systemd[1]: Started sshd@24-10.0.0.6:22-10.0.0.1:33002.service - OpenSSH per-connection server daemon (10.0.0.1:33002). Apr 16 02:11:07.934832 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 33002 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:07.961278 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:08.152180 systemd-logind[1442]: New session 25 of user core. Apr 16 02:11:08.202238 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 16 02:11:08.913320 kernel: NET: Registered PF_ALG protocol family Apr 16 02:11:10.533318 sshd[3845]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:10.639268 systemd[1]: sshd@24-10.0.0.6:22-10.0.0.1:33002.service: Deactivated successfully. Apr 16 02:11:10.687935 systemd[1]: session-25.scope: Deactivated successfully. Apr 16 02:11:10.690128 systemd[1]: session-25.scope: Consumed 1.450s CPU time. Apr 16 02:11:10.696794 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit. Apr 16 02:11:10.701909 systemd-logind[1442]: Removed session 25. 
Apr 16 02:11:13.306082 kubelet[2579]: E0416 02:11:13.305767 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:15.561932 systemd[1]: Started sshd@25-10.0.0.6:22-10.0.0.1:53814.service - OpenSSH per-connection server daemon (10.0.0.1:53814). Apr 16 02:11:15.774293 sshd[3876]: Accepted publickey for core from 10.0.0.1 port 53814 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:15.778708 sshd[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:15.795682 systemd-logind[1442]: New session 26 of user core. Apr 16 02:11:15.806598 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 16 02:11:16.385984 sshd[3876]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:16.394816 systemd[1]: sshd@25-10.0.0.6:22-10.0.0.1:53814.service: Deactivated successfully. Apr 16 02:11:16.402493 systemd[1]: session-26.scope: Deactivated successfully. Apr 16 02:11:16.428830 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit. Apr 16 02:11:16.443793 systemd-logind[1442]: Removed session 26. Apr 16 02:11:16.724439 kubelet[2579]: E0416 02:11:16.721504 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:18.515739 kubelet[2579]: E0416 02:11:18.515338 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:21.429235 systemd[1]: Started sshd@26-10.0.0.6:22-10.0.0.1:40498.service - OpenSSH per-connection server daemon (10.0.0.1:40498). 
Apr 16 02:11:21.594862 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 40498 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:21.603343 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:21.625562 systemd-logind[1442]: New session 27 of user core. Apr 16 02:11:21.642650 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 16 02:11:22.237646 sshd[3992]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:22.247754 systemd[1]: sshd@26-10.0.0.6:22-10.0.0.1:40498.service: Deactivated successfully. Apr 16 02:11:22.254217 systemd[1]: session-27.scope: Deactivated successfully. Apr 16 02:11:22.261874 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit. Apr 16 02:11:22.271389 systemd-logind[1442]: Removed session 27. Apr 16 02:11:24.904177 systemd-networkd[1292]: lxc_health: Link UP Apr 16 02:11:24.929984 systemd-networkd[1292]: lxc_health: Gained carrier Apr 16 02:11:25.945775 containerd[1463]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Apr 16 02:11:25.950751 containerd[1463]: time="2026-04-16T02:11:25.950438870Z" level=info msg="TearDown network for sandbox \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\" successfully" Apr 16 02:11:25.950751 containerd[1463]: time="2026-04-16T02:11:25.950740890Z" level=info msg="StopPodSandbox for \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\" returns successfully" Apr 16 02:11:25.988905 systemd[1]: run-netns-cni\x2dfbce178d\x2d1824\x2d5dd7\x2dc01a\x2dc0cd01fcc687.mount: Deactivated successfully. 
Apr 16 02:11:25.992501 kubelet[2579]: E0416 02:11:25.991445 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:26.052932 containerd[1463]: time="2026-04-16T02:11:26.046632528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-27d9x,Uid:5933f2cb-ae5a-47e4-91d4-0d8be9480079,Namespace:kube-system,Attempt:1,}" Apr 16 02:11:26.071131 containerd[1463]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Apr 16 02:11:26.085809 containerd[1463]: time="2026-04-16T02:11:26.085252821Z" level=info msg="TearDown network for sandbox \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\" successfully" Apr 16 02:11:26.091470 containerd[1463]: time="2026-04-16T02:11:26.091072261Z" level=info msg="StopPodSandbox for \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\" returns successfully" Apr 16 02:11:26.096399 systemd[1]: run-netns-cni\x2db5ae8993\x2d270d\x2d7440\x2ddcb9\x2dbf9e900799ff.mount: Deactivated successfully. 
Apr 16 02:11:26.127103 kubelet[2579]: E0416 02:11:26.126911 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:26.226242 containerd[1463]: time="2026-04-16T02:11:26.225557031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ss8dh,Uid:fd22b9d8-1786-4923-96f3-3db07d47e21f,Namespace:kube-system,Attempt:1,}" Apr 16 02:11:26.253288 systemd-networkd[1292]: lxc_health: Gained IPv6LL Apr 16 02:11:26.286862 kubelet[2579]: E0416 02:11:26.286723 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:26.407237 kubelet[2579]: I0416 02:11:26.402379 2579 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-b5rcs" podStartSLOduration=117.899700107 podStartE2EDuration="3m13.397745441s" podCreationTimestamp="2026-04-16 02:08:13 +0000 UTC" firstStartedPulling="2026-04-16 02:08:14.630393153 +0000 UTC m=+6.640164197" lastFinishedPulling="2026-04-16 02:09:30.128438473 +0000 UTC m=+82.138209531" observedRunningTime="2026-04-16 02:09:55.386457347 +0000 UTC m=+107.396228422" watchObservedRunningTime="2026-04-16 02:11:26.397745441 +0000 UTC m=+198.407516504" Apr 16 02:11:26.580403 systemd-networkd[1292]: lxccd55fb95ba0b: Link UP Apr 16 02:11:26.618505 kernel: eth0: renamed from tmp9dae3 Apr 16 02:11:26.631944 systemd-networkd[1292]: lxccd55fb95ba0b: Gained carrier Apr 16 02:11:26.786241 systemd-networkd[1292]: lxc89b942108b26: Link UP Apr 16 02:11:26.814874 kernel: eth0: renamed from tmpe8b5a Apr 16 02:11:26.829248 systemd-networkd[1292]: lxc89b942108b26: Gained carrier Apr 16 02:11:27.097018 kubelet[2579]: E0416 02:11:27.087472 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:11:27.321513 systemd[1]: Started sshd@27-10.0.0.6:22-10.0.0.1:40510.service - OpenSSH per-connection server daemon (10.0.0.1:40510). Apr 16 02:11:27.461460 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 40510 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:27.530437 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:27.545559 systemd-logind[1442]: New session 28 of user core. Apr 16 02:11:27.561490 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 16 02:11:27.917040 systemd-networkd[1292]: lxccd55fb95ba0b: Gained IPv6LL Apr 16 02:11:27.961465 sshd[4196]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:28.044943 systemd-logind[1442]: Session 28 logged out. Waiting for processes to exit. Apr 16 02:11:28.055303 systemd[1]: sshd@27-10.0.0.6:22-10.0.0.1:40510.service: Deactivated successfully. Apr 16 02:11:28.081195 systemd[1]: session-28.scope: Deactivated successfully. Apr 16 02:11:28.086959 systemd-logind[1442]: Removed session 28. Apr 16 02:11:28.368662 systemd-networkd[1292]: lxc89b942108b26: Gained IPv6LL Apr 16 02:11:33.043365 systemd[1]: Started sshd@28-10.0.0.6:22-10.0.0.1:51236.service - OpenSSH per-connection server daemon (10.0.0.1:51236). Apr 16 02:11:33.210739 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 51236 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:33.218489 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:33.246865 systemd-logind[1442]: New session 29 of user core. Apr 16 02:11:33.264697 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 16 02:11:33.692935 sshd[4216]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:33.700642 systemd[1]: sshd@28-10.0.0.6:22-10.0.0.1:51236.service: Deactivated successfully.
Apr 16 02:11:33.709450 systemd[1]: session-29.scope: Deactivated successfully. Apr 16 02:11:33.729168 systemd-logind[1442]: Session 29 logged out. Waiting for processes to exit. Apr 16 02:11:33.741854 systemd-logind[1442]: Removed session 29. Apr 16 02:11:38.735114 systemd[1]: Started sshd@29-10.0.0.6:22-10.0.0.1:51242.service - OpenSSH per-connection server daemon (10.0.0.1:51242). Apr 16 02:11:38.923777 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 51242 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:38.963370 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:39.053867 systemd-logind[1442]: New session 30 of user core. Apr 16 02:11:39.086993 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 16 02:11:40.093209 sshd[4233]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:40.115476 systemd-logind[1442]: Session 30 logged out. Waiting for processes to exit. Apr 16 02:11:40.116672 systemd[1]: sshd@29-10.0.0.6:22-10.0.0.1:51242.service: Deactivated successfully. Apr 16 02:11:40.131031 systemd[1]: session-30.scope: Deactivated successfully. Apr 16 02:11:40.162426 systemd-logind[1442]: Removed session 30. Apr 16 02:11:45.181384 systemd[1]: Started sshd@30-10.0.0.6:22-10.0.0.1:45218.service - OpenSSH per-connection server daemon (10.0.0.1:45218). Apr 16 02:11:45.389203 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 45218 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:45.391707 sshd[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:45.412243 systemd-logind[1442]: New session 31 of user core. Apr 16 02:11:45.428138 systemd[1]: Started session-31.scope - Session 31 of User core. 
Apr 16 02:11:45.956024 sshd[4249]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:45.968482 systemd[1]: sshd@30-10.0.0.6:22-10.0.0.1:45218.service: Deactivated successfully. Apr 16 02:11:45.977595 systemd[1]: session-31.scope: Deactivated successfully. Apr 16 02:11:45.982196 systemd-logind[1442]: Session 31 logged out. Waiting for processes to exit. Apr 16 02:11:46.001316 systemd[1]: Started sshd@31-10.0.0.6:22-10.0.0.1:45230.service - OpenSSH per-connection server daemon (10.0.0.1:45230). Apr 16 02:11:46.005191 systemd-logind[1442]: Removed session 31. Apr 16 02:11:46.220931 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 45230 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:46.223821 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:46.240350 systemd-logind[1442]: New session 32 of user core. Apr 16 02:11:46.256427 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 16 02:11:47.133962 sshd[4266]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:47.156048 systemd[1]: sshd@31-10.0.0.6:22-10.0.0.1:45230.service: Deactivated successfully. Apr 16 02:11:47.165904 systemd[1]: session-32.scope: Deactivated successfully. Apr 16 02:11:47.175296 systemd-logind[1442]: Session 32 logged out. Waiting for processes to exit. Apr 16 02:11:47.185988 systemd[1]: Started sshd@32-10.0.0.6:22-10.0.0.1:45246.service - OpenSSH per-connection server daemon (10.0.0.1:45246). Apr 16 02:11:47.188253 systemd-logind[1442]: Removed session 32. Apr 16 02:11:47.383649 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 45246 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:47.386096 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:47.412457 systemd-logind[1442]: New session 33 of user core. 
Apr 16 02:11:47.425818 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 16 02:11:51.489172 sshd[4278]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:51.511725 systemd[1]: sshd@32-10.0.0.6:22-10.0.0.1:45246.service: Deactivated successfully. Apr 16 02:11:51.546280 systemd[1]: session-33.scope: Deactivated successfully. Apr 16 02:11:51.546642 systemd[1]: session-33.scope: Consumed 2.630s CPU time. Apr 16 02:11:51.549163 systemd-logind[1442]: Session 33 logged out. Waiting for processes to exit. Apr 16 02:11:51.606836 systemd[1]: Started sshd@33-10.0.0.6:22-10.0.0.1:36570.service - OpenSSH per-connection server daemon (10.0.0.1:36570). Apr 16 02:11:51.615809 systemd-logind[1442]: Removed session 33. Apr 16 02:11:51.696140 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 36570 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:51.701221 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:51.740809 systemd-logind[1442]: New session 34 of user core. Apr 16 02:11:51.749813 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 16 02:11:54.531758 sshd[4298]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:54.551315 systemd[1]: sshd@33-10.0.0.6:22-10.0.0.1:36570.service: Deactivated successfully. Apr 16 02:11:54.650866 systemd[1]: session-34.scope: Deactivated successfully. Apr 16 02:11:54.651345 systemd[1]: session-34.scope: Consumed 2.109s CPU time. Apr 16 02:11:54.656267 systemd-logind[1442]: Session 34 logged out. Waiting for processes to exit. Apr 16 02:11:54.678830 systemd[1]: Started sshd@34-10.0.0.6:22-10.0.0.1:36574.service - OpenSSH per-connection server daemon (10.0.0.1:36574). Apr 16 02:11:54.686962 systemd-logind[1442]: Removed session 34. 
Apr 16 02:11:54.806138 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 36574 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:11:54.811614 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:11:54.943268 systemd-logind[1442]: New session 35 of user core. Apr 16 02:11:55.046211 systemd[1]: Started session-35.scope - Session 35 of User core. Apr 16 02:11:55.698396 sshd[4313]: pam_unix(sshd:session): session closed for user core Apr 16 02:11:55.708607 systemd-logind[1442]: Session 35 logged out. Waiting for processes to exit. Apr 16 02:11:55.710007 systemd[1]: sshd@34-10.0.0.6:22-10.0.0.1:36574.service: Deactivated successfully. Apr 16 02:11:55.713852 systemd[1]: session-35.scope: Deactivated successfully. Apr 16 02:11:55.725601 systemd-logind[1442]: Removed session 35. Apr 16 02:11:59.478701 kubelet[2579]: E0416 02:11:59.478322 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:12:00.468829 kubelet[2579]: E0416 02:12:00.467311 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:12:00.816732 systemd[1]: Started sshd@35-10.0.0.6:22-10.0.0.1:60436.service - OpenSSH per-connection server daemon (10.0.0.1:60436). Apr 16 02:12:00.985695 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 60436 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:12:00.989808 sshd[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:12:01.127422 systemd-logind[1442]: New session 36 of user core. Apr 16 02:12:01.133042 systemd[1]: Started session-36.scope - Session 36 of User core. 
Apr 16 02:12:02.733897 sshd[4327]: pam_unix(sshd:session): session closed for user core
Apr 16 02:12:02.796379 systemd-logind[1442]: Session 36 logged out. Waiting for processes to exit.
Apr 16 02:12:02.802772 systemd[1]: sshd@35-10.0.0.6:22-10.0.0.1:60436.service: Deactivated successfully.
Apr 16 02:12:02.819969 systemd[1]: session-36.scope: Deactivated successfully.
Apr 16 02:12:02.822863 systemd[1]: session-36.scope: Consumed 1.142s CPU time.
Apr 16 02:12:02.904357 systemd-logind[1442]: Removed session 36.
Apr 16 02:12:07.832750 systemd[1]: Started sshd@36-10.0.0.6:22-10.0.0.1:60444.service - OpenSSH per-connection server daemon (10.0.0.1:60444).
Apr 16 02:12:07.964018 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 60444 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M
Apr 16 02:12:08.064316 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:12:08.118864 systemd-logind[1442]: New session 37 of user core.
Apr 16 02:12:08.132869 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 16 02:12:08.689779 sshd[4343]: pam_unix(sshd:session): session closed for user core
Apr 16 02:12:08.697369 systemd[1]: sshd@36-10.0.0.6:22-10.0.0.1:60444.service: Deactivated successfully.
Apr 16 02:12:08.706329 systemd[1]: session-37.scope: Deactivated successfully.
Apr 16 02:12:08.716027 systemd-logind[1442]: Session 37 logged out. Waiting for processes to exit.
Apr 16 02:12:08.723199 systemd-logind[1442]: Removed session 37.
Apr 16 02:12:09.466399 kubelet[2579]: E0416 02:12:09.465807 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:12:13.725066 systemd[1]: Started sshd@37-10.0.0.6:22-10.0.0.1:43904.service - OpenSSH per-connection server daemon (10.0.0.1:43904).
Apr 16 02:12:13.925204 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 43904 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M
Apr 16 02:12:13.932055 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:12:13.975954 systemd-logind[1442]: New session 38 of user core.
Apr 16 02:12:13.991281 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 16 02:12:14.754104 sshd[4360]: pam_unix(sshd:session): session closed for user core
Apr 16 02:12:14.826861 systemd[1]: sshd@37-10.0.0.6:22-10.0.0.1:43904.service: Deactivated successfully.
Apr 16 02:12:14.832894 systemd[1]: session-38.scope: Deactivated successfully.
Apr 16 02:12:14.838071 systemd-logind[1442]: Session 38 logged out. Waiting for processes to exit.
Apr 16 02:12:14.844451 systemd-logind[1442]: Removed session 38.
Apr 16 02:12:19.786976 systemd[1]: Started sshd@38-10.0.0.6:22-10.0.0.1:37724.service - OpenSSH per-connection server daemon (10.0.0.1:37724).
Apr 16 02:12:19.895210 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 37724 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M
Apr 16 02:12:19.914980 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:12:20.020803 systemd-logind[1442]: New session 39 of user core.
Apr 16 02:12:20.037648 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 16 02:12:20.739422 sshd[4377]: pam_unix(sshd:session): session closed for user core
Apr 16 02:12:20.746275 systemd[1]: sshd@38-10.0.0.6:22-10.0.0.1:37724.service: Deactivated successfully.
Apr 16 02:12:20.773360 systemd[1]: session-39.scope: Deactivated successfully.
Apr 16 02:12:20.777457 systemd-logind[1442]: Session 39 logged out. Waiting for processes to exit.
Apr 16 02:12:20.785175 systemd-logind[1442]: Removed session 39.
Apr 16 02:12:25.835898 systemd[1]: Started sshd@39-10.0.0.6:22-10.0.0.1:37726.service - OpenSSH per-connection server daemon (10.0.0.1:37726).
Apr 16 02:12:25.916741 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 37726 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M
Apr 16 02:12:25.922056 sshd[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:12:25.944319 systemd-logind[1442]: New session 40 of user core.
Apr 16 02:12:25.954887 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 16 02:12:26.604022 kubelet[2579]: E0416 02:12:26.602266 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:12:26.879989 sshd[4394]: pam_unix(sshd:session): session closed for user core
Apr 16 02:12:26.937150 systemd[1]: sshd@39-10.0.0.6:22-10.0.0.1:37726.service: Deactivated successfully.
Apr 16 02:12:27.038842 systemd[1]: session-40.scope: Deactivated successfully.
Apr 16 02:12:27.054908 systemd-logind[1442]: Session 40 logged out. Waiting for processes to exit.
Apr 16 02:12:27.065317 systemd-logind[1442]: Removed session 40.
Apr 16 02:12:31.942282 systemd[1]: Started sshd@40-10.0.0.6:22-10.0.0.1:55986.service - OpenSSH per-connection server daemon (10.0.0.1:55986).
Apr 16 02:12:32.329046 sshd[4408]: Accepted publickey for core from 10.0.0.1 port 55986 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M
Apr 16 02:12:32.332011 sshd[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:12:32.349886 systemd-logind[1442]: New session 41 of user core.
Apr 16 02:12:32.358323 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 16 02:12:32.914638 sshd[4408]: pam_unix(sshd:session): session closed for user core
Apr 16 02:12:32.954036 systemd[1]: sshd@40-10.0.0.6:22-10.0.0.1:55986.service: Deactivated successfully.
Apr 16 02:12:33.003384 systemd[1]: session-41.scope: Deactivated successfully.
Apr 16 02:12:33.017025 systemd-logind[1442]: Session 41 logged out. Waiting for processes to exit.
Apr 16 02:12:33.019449 systemd-logind[1442]: Removed session 41.
Apr 16 02:12:35.471998 kubelet[2579]: E0416 02:12:35.471494 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:12:37.473767 kubelet[2579]: E0416 02:12:37.473497 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:12:38.032750 systemd[1]: Started sshd@41-10.0.0.6:22-10.0.0.1:55992.service - OpenSSH per-connection server daemon (10.0.0.1:55992).
Apr 16 02:12:38.187056 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 55992 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M
Apr 16 02:12:38.199795 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:12:38.251180 systemd-logind[1442]: New session 42 of user core.
Apr 16 02:12:38.312419 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 16 02:12:38.912466 sshd[4423]: pam_unix(sshd:session): session closed for user core
Apr 16 02:12:38.937029 systemd[1]: sshd@41-10.0.0.6:22-10.0.0.1:55992.service: Deactivated successfully.
Apr 16 02:12:38.945321 systemd[1]: session-42.scope: Deactivated successfully.
Apr 16 02:12:38.949392 systemd-logind[1442]: Session 42 logged out. Waiting for processes to exit.
Apr 16 02:12:38.995735 systemd[1]: Started sshd@42-10.0.0.6:22-10.0.0.1:55994.service - OpenSSH per-connection server daemon (10.0.0.1:55994).
Apr 16 02:12:39.001939 systemd-logind[1442]: Removed session 42.
Apr 16 02:12:39.197860 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 55994 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M
Apr 16 02:12:39.199961 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:12:39.215432 systemd-logind[1442]: New session 43 of user core.
Apr 16 02:12:39.223818 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 16 02:12:43.237847 containerd[1463]: time="2026-04-16T02:12:43.234851606Z" level=info msg="StopContainer for \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\" with timeout 30 (s)"
Apr 16 02:12:43.253816 containerd[1463]: time="2026-04-16T02:12:43.249390855Z" level=info msg="Stop container \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\" with signal terminated"
Apr 16 02:12:43.518455 systemd[1]: cri-containerd-e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5.scope: Deactivated successfully.
Apr 16 02:12:43.523514 systemd[1]: cri-containerd-e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5.scope: Consumed 10.689s CPU time.
Apr 16 02:12:43.912767 containerd[1463]: time="2026-04-16T02:12:43.907838518Z" level=info msg="StopContainer for \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\" with timeout 2 (s)"
Apr 16 02:12:43.958874 containerd[1463]: time="2026-04-16T02:12:43.958099920Z" level=info msg="Stop container \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\" with signal terminated"
Apr 16 02:12:44.463841 sshd[4437]: pam_unix(sshd:session): session closed for user core
Apr 16 02:12:44.527068 systemd-networkd[1292]: lxc_health: Link DOWN
Apr 16 02:12:44.527079 systemd-networkd[1292]: lxc_health: Lost carrier
Apr 16 02:12:44.538345 systemd[1]: sshd@42-10.0.0.6:22-10.0.0.1:55994.service: Deactivated successfully.
Apr 16 02:12:44.575744 systemd[1]: session-43.scope: Deactivated successfully.
Apr 16 02:12:44.577835 systemd[1]: session-43.scope: Consumed 2.913s CPU time.
Apr 16 02:12:44.610940 systemd-logind[1442]: Session 43 logged out. Waiting for processes to exit.
Apr 16 02:12:44.725980 systemd[1]: Started sshd@43-10.0.0.6:22-10.0.0.1:58858.service - OpenSSH per-connection server daemon (10.0.0.1:58858).
Apr 16 02:12:44.747780 systemd-logind[1442]: Removed session 43.
Apr 16 02:12:44.793145 containerd[1463]: time="2026-04-16T02:12:44.591594850Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 02:12:45.201847 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 58858 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M
Apr 16 02:12:45.206702 sshd[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:12:45.311046 systemd-logind[1442]: New session 44 of user core.
Apr 16 02:12:45.331290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5-rootfs.mount: Deactivated successfully.
Apr 16 02:12:45.397831 containerd[1463]: time="2026-04-16T02:12:45.396097361Z" level=info msg="shim disconnected" id=e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5 namespace=k8s.io
Apr 16 02:12:45.397831 containerd[1463]: time="2026-04-16T02:12:45.396456262Z" level=warning msg="cleaning up after shim disconnected" id=e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5 namespace=k8s.io
Apr 16 02:12:45.397831 containerd[1463]: time="2026-04-16T02:12:45.396467841Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 02:12:45.430486 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 16 02:12:46.032713 containerd[1463]: time="2026-04-16T02:12:46.031906499Z" level=warning msg="cleanup warnings time=\"2026-04-16T02:12:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 16 02:12:46.072128 containerd[1463]: time="2026-04-16T02:12:46.072046106Z" level=info msg="StopContainer for \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\" returns successfully"
Apr 16 02:12:46.093425 containerd[1463]: time="2026-04-16T02:12:46.093212078Z" level=info msg="StopPodSandbox for \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\""
Apr 16 02:12:46.097498 containerd[1463]: time="2026-04-16T02:12:46.096913466Z" level=info msg="Container to stop \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 02:12:46.107139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b-shm.mount: Deactivated successfully.
Apr 16 02:12:46.153595 systemd[1]: cri-containerd-dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b.scope: Deactivated successfully.
Apr 16 02:12:46.326618 containerd[1463]: time="2026-04-16T02:12:46.325979104Z" level=info msg="Kill container \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\""
Apr 16 02:12:46.401164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b-rootfs.mount: Deactivated successfully.
Apr 16 02:12:46.410118 systemd-networkd[1292]: lxccd55fb95ba0b: Link DOWN
Apr 16 02:12:46.410126 systemd-networkd[1292]: lxccd55fb95ba0b: Lost carrier
Apr 16 02:12:46.413975 systemd[1]: cri-containerd-631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440.scope: Deactivated successfully.
Apr 16 02:12:46.414597 systemd[1]: cri-containerd-631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440.scope: Consumed 1min 58.495s CPU time.
Apr 16 02:12:46.429963 systemd-networkd[1292]: lxc89b942108b26: Link DOWN
Apr 16 02:12:46.429969 systemd-networkd[1292]: lxc89b942108b26: Lost carrier
Apr 16 02:12:46.593039 containerd[1463]: time="2026-04-16T02:12:46.586780170Z" level=error msg="Failed to destroy network for sandbox \"e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56\"" error="cni plugin not initialized"
Apr 16 02:12:46.607753 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56-shm.mount: Deactivated successfully.
Apr 16 02:12:46.637874 containerd[1463]: time="2026-04-16T02:12:46.617083158Z" level=error msg="encountered an error cleaning up failed sandbox \"e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56\", marking sandbox state as SANDBOX_UNKNOWN" error="cni plugin not initialized"
Apr 16 02:12:46.669053 containerd[1463]: time="2026-04-16T02:12:46.663598560Z" level=error msg="Failed to destroy network for sandbox \"9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578\"" error="cni plugin not initialized"
Apr 16 02:12:46.669053 containerd[1463]: time="2026-04-16T02:12:46.664315018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ss8dh,Uid:fd22b9d8-1786-4923-96f3-3db07d47e21f,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF"
Apr 16 02:12:46.739010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578-shm.mount: Deactivated successfully.
Apr 16 02:12:46.762395 containerd[1463]: time="2026-04-16T02:12:46.739507096Z" level=info msg="shim disconnected" id=dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b namespace=k8s.io
Apr 16 02:12:46.763121 containerd[1463]: time="2026-04-16T02:12:46.739842829Z" level=error msg="encountered an error cleaning up failed sandbox \"9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578\", marking sandbox state as SANDBOX_UNKNOWN" error="cni plugin not initialized"
Apr 16 02:12:46.764423 containerd[1463]: time="2026-04-16T02:12:46.764273091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-27d9x,Uid:5933f2cb-ae5a-47e4-91d4-0d8be9480079,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF"
Apr 16 02:12:46.776884 containerd[1463]: time="2026-04-16T02:12:46.772289870Z" level=warning msg="cleaning up after shim disconnected" id=dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b namespace=k8s.io
Apr 16 02:12:46.776884 containerd[1463]: time="2026-04-16T02:12:46.775424708Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 02:12:46.782393 kubelet[2579]: E0416 02:12:46.781699 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF"
Apr 16 02:12:46.786054 kubelet[2579]: E0416 02:12:46.779312 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF"
Apr 16 02:12:46.789705 kubelet[2579]: E0416 02:12:46.782503 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" pod="kube-system/coredns-7d764666f9-27d9x"
Apr 16 02:12:46.789705 kubelet[2579]: E0416 02:12:46.787711 2579 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" pod="kube-system/coredns-7d764666f9-27d9x"
Apr 16 02:12:46.791140 kubelet[2579]: E0416 02:12:46.790034 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" pod="kube-system/coredns-7d764666f9-ss8dh"
Apr 16 02:12:46.791140 kubelet[2579]: E0416 02:12:46.790823 2579 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" pod="kube-system/coredns-7d764666f9-ss8dh"
Apr 16 02:12:46.794705 kubelet[2579]: E0416 02:12:46.793915 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-27d9x_kube-system(5933f2cb-ae5a-47e4-91d4-0d8be9480079)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-27d9x_kube-system(5933f2cb-ae5a-47e4-91d4-0d8be9480079)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Put \\\"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\\\": EOF\"" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079"
Apr 16 02:12:46.806664 kubelet[2579]: E0416 02:12:46.805300 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-ss8dh_kube-system(fd22b9d8-1786-4923-96f3-3db07d47e21f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-ss8dh_kube-system(fd22b9d8-1786-4923-96f3-3db07d47e21f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Put \\\"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\\\": EOF\"" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f"
Apr 16 02:12:46.928594 containerd[1463]: time="2026-04-16T02:12:46.924045547Z" level=warning msg="cleanup warnings time=\"2026-04-16T02:12:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 16 02:12:46.947083 containerd[1463]: time="2026-04-16T02:12:46.945115379Z" level=info msg="TearDown network for sandbox \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\" successfully"
Apr 16 02:12:46.947083 containerd[1463]: time="2026-04-16T02:12:46.945375303Z" level=info msg="StopPodSandbox for \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\" returns successfully"
Apr 16 02:12:47.040903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440-rootfs.mount: Deactivated successfully.
Apr 16 02:12:47.080504 containerd[1463]: time="2026-04-16T02:12:47.080099035Z" level=info msg="shim disconnected" id=631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440 namespace=k8s.io
Apr 16 02:12:47.080504 containerd[1463]: time="2026-04-16T02:12:47.080355505Z" level=warning msg="cleaning up after shim disconnected" id=631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440 namespace=k8s.io
Apr 16 02:12:47.080504 containerd[1463]: time="2026-04-16T02:12:47.080364442Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 02:12:47.301787 kubelet[2579]: E0416 02:12:47.296034 2579 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 02:12:47.302656 kubelet[2579]: E0416 02:12:47.302013 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079"
Apr 16 02:12:47.305836 kubelet[2579]: I0416 02:12:47.304688 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578"
Apr 16 02:12:47.336505 kubelet[2579]: I0416 02:12:47.333114 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6-cilium-config-path\") pod \"c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6\" (UID: \"c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6\") "
Apr 16 02:12:47.339495 kubelet[2579]: I0416 02:12:47.337296 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6-kube-api-access-k5jkz\" (UniqueName: \"kubernetes.io/projected/c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6-kube-api-access-k5jkz\") pod \"c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6\" (UID: \"c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6\") "
Apr 16 02:12:47.429505 kubelet[2579]: I0416 02:12:47.424039 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6-kube-api-access-k5jkz" pod "c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6" (UID: "c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6"). InnerVolumeSpecName "kube-api-access-k5jkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 02:12:47.454467 kubelet[2579]: I0416 02:12:47.452834 2579 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k5jkz\" (UniqueName: \"kubernetes.io/projected/c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6-kube-api-access-k5jkz\") on node \"localhost\" DevicePath \"\""
Apr 16 02:12:47.513011 kubelet[2579]: I0416 02:12:47.510922 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6-cilium-config-path" pod "c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6" (UID: "c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 02:12:47.493005 systemd[1]: var-lib-kubelet-pods-c2bbf29f\x2df4dc\x2d4f9c\x2db793\x2de58b0fe596d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk5jkz.mount: Deactivated successfully.
Apr 16 02:12:47.525757 containerd[1463]: time="2026-04-16T02:12:47.517980139Z" level=info msg="StopContainer for \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\" returns successfully"
Apr 16 02:12:47.546827 containerd[1463]: time="2026-04-16T02:12:47.546473899Z" level=info msg="StopPodSandbox for \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\""
Apr 16 02:12:47.572794 kubelet[2579]: I0416 02:12:47.564373 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 16 02:12:47.579056 containerd[1463]: time="2026-04-16T02:12:47.576609784Z" level=info msg="Container to stop \"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 02:12:47.579391 containerd[1463]: time="2026-04-16T02:12:47.579370639Z" level=info msg="Container to stop \"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 02:12:47.579498 containerd[1463]: time="2026-04-16T02:12:47.579488705Z" level=info msg="Container to stop \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 02:12:47.579605 containerd[1463]: time="2026-04-16T02:12:47.579597346Z" level=info msg="Container to stop \"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 02:12:47.580349 kubelet[2579]: I0416 02:12:47.579877 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56"
Apr 16 02:12:47.580349 kubelet[2579]: E0416 02:12:47.579961 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f"
Apr 16 02:12:47.580475 containerd[1463]: time="2026-04-16T02:12:47.580457999Z" level=info msg="Container to stop \"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 16 02:12:47.592123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217-shm.mount: Deactivated successfully.
Apr 16 02:12:47.600621 kubelet[2579]: I0416 02:12:47.598663 2579 scope.go:122] "RemoveContainer" containerID="e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5"
Apr 16 02:12:47.620422 systemd[1]: Removed slice kubepods-besteffort-podc2bbf29f_f4dc_4f9c_b793_e58b0fe596d6.slice - libcontainer container kubepods-besteffort-podc2bbf29f_f4dc_4f9c_b793_e58b0fe596d6.slice.
Apr 16 02:12:47.620762 systemd[1]: kubepods-besteffort-podc2bbf29f_f4dc_4f9c_b793_e58b0fe596d6.slice: Consumed 10.721s CPU time.
Apr 16 02:12:47.728669 systemd[1]: cri-containerd-61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217.scope: Deactivated successfully.
Apr 16 02:12:47.777461 containerd[1463]: time="2026-04-16T02:12:47.777104908Z" level=info msg="RemoveContainer for \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\""
Apr 16 02:12:47.824508 containerd[1463]: time="2026-04-16T02:12:47.820930899Z" level=info msg="RemoveContainer for \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\" returns successfully"
Apr 16 02:12:47.875115 kubelet[2579]: I0416 02:12:47.872914 2579 scope.go:122] "RemoveContainer" containerID="e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5"
Apr 16 02:12:47.887748 containerd[1463]: time="2026-04-16T02:12:47.886627214Z" level=error msg="ContainerStatus for \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\": not found"
Apr 16 02:12:47.888051 kubelet[2579]: E0416 02:12:47.887415 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\": not found" containerID="e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5"
Apr 16 02:12:47.888453 kubelet[2579]: I0416 02:12:47.887494 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5"} err="failed to get container status \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e844ad9f44ff4142ab1237ae88e2712544c0220c18b4863fd05baa0047298fe5\": not found"
Apr 16 02:12:48.050192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217-rootfs.mount: Deactivated successfully.
Apr 16 02:12:48.066931 containerd[1463]: time="2026-04-16T02:12:48.052975901Z" level=info msg="shim disconnected" id=61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217 namespace=k8s.io
Apr 16 02:12:48.081143 containerd[1463]: time="2026-04-16T02:12:48.065425138Z" level=warning msg="cleaning up after shim disconnected" id=61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217 namespace=k8s.io
Apr 16 02:12:48.085098 containerd[1463]: time="2026-04-16T02:12:48.081756580Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 16 02:12:48.323594 containerd[1463]: time="2026-04-16T02:12:48.323220756Z" level=info msg="TearDown network for sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" successfully"
Apr 16 02:12:48.323594 containerd[1463]: time="2026-04-16T02:12:48.323464333Z" level=info msg="StopPodSandbox for \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" returns successfully"
Apr 16 02:12:48.553989 kubelet[2579]: I0416 02:12:48.549459 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cni-path\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cni-path\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.553989 kubelet[2579]: I0416 02:12:48.549641 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-host-proc-sys-kernel\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.553989 kubelet[2579]: I0416 02:12:48.549738 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/3d917cf3-4394-48cb-a90e-a40e12c6e709-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d917cf3-4394-48cb-a90e-a40e12c6e709-clustermesh-secrets\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.553989 kubelet[2579]: I0416 02:12:48.549760 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-host-proc-sys-net\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.553989 kubelet[2579]: I0416 02:12:48.549783 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/3d917cf3-4394-48cb-a90e-a40e12c6e709-kube-api-access-v57cm\" (UniqueName: \"kubernetes.io/projected/3d917cf3-4394-48cb-a90e-a40e12c6e709-kube-api-access-v57cm\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.556157 kubelet[2579]: I0416 02:12:48.549843 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-cgroup\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.556157 kubelet[2579]: I0416 02:12:48.549867 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/3d917cf3-4394-48cb-a90e-a40e12c6e709-hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d917cf3-4394-48cb-a90e-a40e12c6e709-hubble-tls\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.556157 kubelet[2579]: I0416 02:12:48.549888 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-hostproc\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-hostproc\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.556157 kubelet[2579]: I0416 02:12:48.549910 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-xtables-lock\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.556157 kubelet[2579]: I0416 02:12:48.549954 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-etc-cni-netd\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.556412 kubelet[2579]: I0416 02:12:48.549977 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-bpf-maps\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.556412 kubelet[2579]: I0416 02:12:48.549998 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-config-path\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") "
Apr 16 02:12:48.556412
kubelet[2579]: I0416 02:12:48.550019 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-lib-modules\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " Apr 16 02:12:48.556412 kubelet[2579]: I0416 02:12:48.550083 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-run\") pod \"3d917cf3-4394-48cb-a90e-a40e12c6e709\" (UID: \"3d917cf3-4394-48cb-a90e-a40e12c6e709\") " Apr 16 02:12:48.556412 kubelet[2579]: I0416 02:12:48.552047 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-run" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:12:48.574643 kubelet[2579]: I0416 02:12:48.555913 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cni-path" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:12:48.574643 kubelet[2579]: I0416 02:12:48.556411 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-host-proc-sys-kernel" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:12:48.574643 kubelet[2579]: I0416 02:12:48.566780 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-hostproc" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:12:48.574643 kubelet[2579]: I0416 02:12:48.567019 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-host-proc-sys-net" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:12:48.582728 kubelet[2579]: I0416 02:12:48.579089 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-cgroup" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:12:48.582728 kubelet[2579]: I0416 02:12:48.579052 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-xtables-lock" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:12:48.617895 kubelet[2579]: I0416 02:12:48.604143 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-etc-cni-netd" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:12:48.793082 systemd[1]: var-lib-kubelet-pods-3d917cf3\x2d4394\x2d48cb\x2da90e\x2da40e12c6e709-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 16 02:12:48.819139 kubelet[2579]: I0416 02:12:48.802504 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-bpf-maps" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:12:48.819139 kubelet[2579]: I0416 02:12:48.733901 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-lib-modules" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 02:12:48.821123 kubelet[2579]: I0416 02:12:48.819880 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d917cf3-4394-48cb-a90e-a40e12c6e709-hubble-tls" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 02:12:48.821123 kubelet[2579]: I0416 02:12:48.819963 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-config-path" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 02:12:48.821123 kubelet[2579]: I0416 02:12:48.821086 2579 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.821123 kubelet[2579]: I0416 02:12:48.821107 2579 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.821123 kubelet[2579]: I0416 02:12:48.821116 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.821332 kubelet[2579]: I0416 02:12:48.821163 2579 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.821332 kubelet[2579]: I0416 02:12:48.821173 2579 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.821332 kubelet[2579]: I0416 02:12:48.821181 2579 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.821332 kubelet[2579]: I0416 02:12:48.821187 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.821332 kubelet[2579]: 
I0416 02:12:48.821193 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.821332 kubelet[2579]: I0416 02:12:48.821198 2579 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.830753 kubelet[2579]: I0416 02:12:48.829438 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d917cf3-4394-48cb-a90e-a40e12c6e709-kube-api-access-v57cm" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "kube-api-access-v57cm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 02:12:48.851069 systemd[1]: var-lib-kubelet-pods-3d917cf3\x2d4394\x2d48cb\x2da90e\x2da40e12c6e709-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv57cm.mount: Deactivated successfully. 
Apr 16 02:12:48.942795 kubelet[2579]: I0416 02:12:48.942447 2579 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.948745 kubelet[2579]: I0416 02:12:48.947109 2579 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v57cm\" (UniqueName: \"kubernetes.io/projected/3d917cf3-4394-48cb-a90e-a40e12c6e709-kube-api-access-v57cm\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.948745 kubelet[2579]: I0416 02:12:48.948331 2579 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d917cf3-4394-48cb-a90e-a40e12c6e709-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:48.948745 kubelet[2579]: I0416 02:12:48.948371 2579 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d917cf3-4394-48cb-a90e-a40e12c6e709-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:49.021655 kubelet[2579]: I0416 02:12:49.004067 2579 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d917cf3-4394-48cb-a90e-a40e12c6e709-clustermesh-secrets" pod "3d917cf3-4394-48cb-a90e-a40e12c6e709" (UID: "3d917cf3-4394-48cb-a90e-a40e12c6e709"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 02:12:49.023863 systemd[1]: var-lib-kubelet-pods-3d917cf3\x2d4394\x2d48cb\x2da90e\x2da40e12c6e709-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 16 02:12:49.052850 kubelet[2579]: I0416 02:12:49.052111 2579 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d917cf3-4394-48cb-a90e-a40e12c6e709-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 16 02:12:49.064190 kubelet[2579]: I0416 02:12:49.063299 2579 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6" path="/var/lib/kubelet/pods/c2bbf29f-f4dc-4f9c-b793-e58b0fe596d6/volumes" Apr 16 02:12:49.118286 systemd[1]: Removed slice kubepods-burstable-pod3d917cf3_4394_48cb_a90e_a40e12c6e709.slice - libcontainer container kubepods-burstable-pod3d917cf3_4394_48cb_a90e_a40e12c6e709.slice. Apr 16 02:12:49.121429 systemd[1]: kubepods-burstable-pod3d917cf3_4394_48cb_a90e_a40e12c6e709.slice: Consumed 1min 59.223s CPU time. Apr 16 02:12:49.354009 kubelet[2579]: I0416 02:12:49.352337 2579 scope.go:122] "RemoveContainer" containerID="631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440" Apr 16 02:12:49.354009 kubelet[2579]: E0416 02:12:49.352398 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:12:49.361918 kubelet[2579]: E0416 02:12:49.361112 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:12:49.400502 containerd[1463]: time="2026-04-16T02:12:49.396320297Z" level=info msg="RemoveContainer for \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\"" Apr 16 
02:12:49.514982 containerd[1463]: time="2026-04-16T02:12:49.514673186Z" level=info msg="RemoveContainer for \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\" returns successfully" Apr 16 02:12:49.546649 kubelet[2579]: I0416 02:12:49.544089 2579 scope.go:122] "RemoveContainer" containerID="e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d" Apr 16 02:12:49.641948 containerd[1463]: time="2026-04-16T02:12:49.640738215Z" level=info msg="RemoveContainer for \"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d\"" Apr 16 02:12:49.749361 containerd[1463]: time="2026-04-16T02:12:49.747842184Z" level=info msg="RemoveContainer for \"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d\" returns successfully" Apr 16 02:12:49.765117 kubelet[2579]: I0416 02:12:49.763945 2579 scope.go:122] "RemoveContainer" containerID="61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de" Apr 16 02:12:49.814724 containerd[1463]: time="2026-04-16T02:12:49.814143568Z" level=info msg="RemoveContainer for \"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de\"" Apr 16 02:12:49.835758 containerd[1463]: time="2026-04-16T02:12:49.835170256Z" level=info msg="RemoveContainer for \"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de\" returns successfully" Apr 16 02:12:49.855495 kubelet[2579]: I0416 02:12:49.854712 2579 scope.go:122] "RemoveContainer" containerID="437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a" Apr 16 02:12:49.877169 containerd[1463]: time="2026-04-16T02:12:49.876967059Z" level=info msg="RemoveContainer for \"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a\"" Apr 16 02:12:49.903197 containerd[1463]: time="2026-04-16T02:12:49.903046911Z" level=info msg="RemoveContainer for \"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a\" returns successfully" Apr 16 02:12:49.917500 kubelet[2579]: I0416 02:12:49.916077 2579 scope.go:122] "RemoveContainer" 
containerID="b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545" Apr 16 02:12:49.964400 containerd[1463]: time="2026-04-16T02:12:49.964176735Z" level=info msg="RemoveContainer for \"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545\"" Apr 16 02:12:49.989333 containerd[1463]: time="2026-04-16T02:12:49.989137980Z" level=info msg="RemoveContainer for \"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545\" returns successfully" Apr 16 02:12:49.991413 kubelet[2579]: I0416 02:12:49.991126 2579 scope.go:122] "RemoveContainer" containerID="631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440" Apr 16 02:12:50.013618 containerd[1463]: time="2026-04-16T02:12:50.012931476Z" level=error msg="ContainerStatus for \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\": not found" Apr 16 02:12:50.016787 kubelet[2579]: E0416 02:12:50.016327 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\": not found" containerID="631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440" Apr 16 02:12:50.016787 kubelet[2579]: I0416 02:12:50.016779 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440"} err="failed to get container status \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\": rpc error: code = NotFound desc = an error occurred when try to find container \"631504407de88aba1f0bfefca16b8389072de2dc5da9a8929bf37e325628f440\": not found" Apr 16 02:12:50.016787 kubelet[2579]: I0416 02:12:50.016846 2579 scope.go:122] "RemoveContainer" 
containerID="e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d" Apr 16 02:12:50.132136 containerd[1463]: time="2026-04-16T02:12:50.131416154Z" level=error msg="ContainerStatus for \"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d\": not found" Apr 16 02:12:50.137465 kubelet[2579]: E0416 02:12:50.135947 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d\": not found" containerID="e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d" Apr 16 02:12:50.137465 kubelet[2579]: I0416 02:12:50.136431 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d"} err="failed to get container status \"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9c40627a17c91648c3ff319e77bf5d5dde0df76011ecf188e08e74898cc9d0d\": not found" Apr 16 02:12:50.137465 kubelet[2579]: I0416 02:12:50.136748 2579 scope.go:122] "RemoveContainer" containerID="61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de" Apr 16 02:12:50.167409 containerd[1463]: time="2026-04-16T02:12:50.163746724Z" level=error msg="ContainerStatus for \"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de\": not found" Apr 16 02:12:50.175146 kubelet[2579]: E0416 02:12:50.174454 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de\": not found" containerID="61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de" Apr 16 02:12:50.175507 kubelet[2579]: I0416 02:12:50.175184 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de"} err="failed to get container status \"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de\": rpc error: code = NotFound desc = an error occurred when try to find container \"61e07e8e03e23566a52c22ac48cd5ed8e09d26c75f2e5c086816039094a974de\": not found" Apr 16 02:12:50.175507 kubelet[2579]: I0416 02:12:50.175348 2579 scope.go:122] "RemoveContainer" containerID="437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a" Apr 16 02:12:50.213774 containerd[1463]: time="2026-04-16T02:12:50.207003952Z" level=error msg="ContainerStatus for \"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a\": not found" Apr 16 02:12:50.224208 kubelet[2579]: E0416 02:12:50.223646 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a\": not found" containerID="437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a" Apr 16 02:12:50.224208 kubelet[2579]: I0416 02:12:50.224041 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a"} err="failed to get container status \"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"437e800e806ec8c68bd744d0a9a40b95e95df9e2f5a8785fe08b1060708a018a\": not found" Apr 16 02:12:50.224208 kubelet[2579]: I0416 02:12:50.224209 2579 scope.go:122] "RemoveContainer" containerID="b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545" Apr 16 02:12:50.229104 containerd[1463]: time="2026-04-16T02:12:50.228715268Z" level=error msg="ContainerStatus for \"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545\": not found" Apr 16 02:12:50.230126 kubelet[2579]: E0416 02:12:50.229913 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545\": not found" containerID="b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545" Apr 16 02:12:50.230642 kubelet[2579]: I0416 02:12:50.230215 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545"} err="failed to get container status \"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5c105d66c2b90043e285abeac072ac1b09d2a106dba4027567f361af44a0545\": not found" Apr 16 02:12:50.398067 kubelet[2579]: E0416 02:12:50.397405 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:12:50.401111 kubelet[2579]: E0416 02:12:50.398808 2579 pod_workers.go:1324] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:12:50.703612 kubelet[2579]: I0416 02:12:50.699382 2579 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3d917cf3-4394-48cb-a90e-a40e12c6e709" path="/var/lib/kubelet/pods/3d917cf3-4394-48cb-a90e-a40e12c6e709/volumes" Apr 16 02:12:51.423130 sshd[4490]: pam_unix(sshd:session): session closed for user core Apr 16 02:12:51.517412 systemd[1]: Started sshd@44-10.0.0.6:22-10.0.0.1:57512.service - OpenSSH per-connection server daemon (10.0.0.1:57512). Apr 16 02:12:51.519157 systemd[1]: sshd@43-10.0.0.6:22-10.0.0.1:58858.service: Deactivated successfully. Apr 16 02:12:51.544392 systemd[1]: session-44.scope: Deactivated successfully. Apr 16 02:12:51.544897 systemd[1]: session-44.scope: Consumed 4.136s CPU time. Apr 16 02:12:51.593170 systemd-logind[1442]: Session 44 logged out. Waiting for processes to exit. Apr 16 02:12:51.648302 systemd-logind[1442]: Removed session 44. Apr 16 02:12:52.155147 sshd[4621]: Accepted publickey for core from 10.0.0.1 port 57512 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:12:52.211208 sshd[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:12:52.380797 kubelet[2579]: E0416 02:12:52.376323 2579 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:12:52.398933 systemd-logind[1442]: New session 45 of user core. Apr 16 02:12:52.452401 systemd[1]: Started session-45.scope - Session 45 of User core. 
Apr 16 02:12:52.752855 kubelet[2579]: E0416 02:12:52.751068 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:12:52.765137 kubelet[2579]: E0416 02:12:52.759422 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:12:52.805303 sshd[4621]: pam_unix(sshd:session): session closed for user core Apr 16 02:12:52.941228 systemd[1]: sshd@44-10.0.0.6:22-10.0.0.1:57512.service: Deactivated successfully. Apr 16 02:12:52.943199 kubelet[2579]: I0416 02:12:52.936613 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/843d7211-6932-4f27-97dc-4fae04b62d94-hubble-tls\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:52.959917 kubelet[2579]: I0416 02:12:52.955713 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/843d7211-6932-4f27-97dc-4fae04b62d94-cni-path\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:52.959917 kubelet[2579]: I0416 02:12:52.955959 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/843d7211-6932-4f27-97dc-4fae04b62d94-xtables-lock\") pod \"cilium-2mhnj\" (UID: 
\"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:52.991850 systemd[1]: session-45.scope: Deactivated successfully. Apr 16 02:12:53.006109 kubelet[2579]: I0416 02:12:53.003792 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mpd7\" (UniqueName: \"kubernetes.io/projected/843d7211-6932-4f27-97dc-4fae04b62d94-kube-api-access-6mpd7\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.030718 systemd-logind[1442]: Session 45 logged out. Waiting for processes to exit. Apr 16 02:12:53.118806 systemd[1]: Started sshd@45-10.0.0.6:22-10.0.0.1:57520.service - OpenSSH per-connection server daemon (10.0.0.1:57520). Apr 16 02:12:53.138082 systemd-logind[1442]: Removed session 45. Apr 16 02:12:53.255325 systemd[1]: Created slice kubepods-burstable-pod843d7211_6932_4f27_97dc_4fae04b62d94.slice - libcontainer container kubepods-burstable-pod843d7211_6932_4f27_97dc_4fae04b62d94.slice. 
Apr 16 02:12:53.410744 kubelet[2579]: I0416 02:12:53.244509 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/843d7211-6932-4f27-97dc-4fae04b62d94-clustermesh-secrets\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.434777 kubelet[2579]: I0416 02:12:53.414493 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/843d7211-6932-4f27-97dc-4fae04b62d94-cilium-ipsec-secrets\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.503211 kubelet[2579]: I0416 02:12:53.503083 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/843d7211-6932-4f27-97dc-4fae04b62d94-etc-cni-netd\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.564318 kubelet[2579]: I0416 02:12:53.563851 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/843d7211-6932-4f27-97dc-4fae04b62d94-host-proc-sys-kernel\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.630809 kubelet[2579]: I0416 02:12:53.629883 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/843d7211-6932-4f27-97dc-4fae04b62d94-cilium-run\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.637002 kubelet[2579]: I0416 02:12:53.634667 2579 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/843d7211-6932-4f27-97dc-4fae04b62d94-bpf-maps\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.637002 kubelet[2579]: I0416 02:12:53.635323 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/843d7211-6932-4f27-97dc-4fae04b62d94-cilium-cgroup\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.637002 kubelet[2579]: I0416 02:12:53.635418 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/843d7211-6932-4f27-97dc-4fae04b62d94-hostproc\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.637002 kubelet[2579]: I0416 02:12:53.635436 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/843d7211-6932-4f27-97dc-4fae04b62d94-lib-modules\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.637002 kubelet[2579]: I0416 02:12:53.635660 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/843d7211-6932-4f27-97dc-4fae04b62d94-cilium-config-path\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.637002 kubelet[2579]: I0416 02:12:53.635683 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/843d7211-6932-4f27-97dc-4fae04b62d94-host-proc-sys-net\") pod \"cilium-2mhnj\" (UID: \"843d7211-6932-4f27-97dc-4fae04b62d94\") " pod="kube-system/cilium-2mhnj" Apr 16 02:12:53.677419 kubelet[2579]: E0416 02:12:53.675977 2579 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.02s" Apr 16 02:12:53.836877 sshd[4631]: Accepted publickey for core from 10.0.0.1 port 57520 ssh2: RSA SHA256:bV/RKz8AI1uIoM1eji3IUg11uc41Dr/FZPk2lH6ww4M Apr 16 02:12:53.874452 sshd[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:12:54.131934 kubelet[2579]: E0416 02:12:54.129716 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:12:54.220831 systemd-logind[1442]: New session 46 of user core. Apr 16 02:12:54.253049 systemd[1]: Started session-46.scope - Session 46 of User core. 
Apr 16 02:12:54.282202 containerd[1463]: time="2026-04-16T02:12:54.280870052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mhnj,Uid:843d7211-6932-4f27-97dc-4fae04b62d94,Namespace:kube-system,Attempt:0,}" Apr 16 02:12:54.500155 kubelet[2579]: E0416 02:12:54.490969 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:12:54.502825 kubelet[2579]: E0416 02:12:54.501204 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:12:55.232081 containerd[1463]: time="2026-04-16T02:12:55.231150534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 02:12:55.232081 containerd[1463]: time="2026-04-16T02:12:55.231723079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 02:12:55.232081 containerd[1463]: time="2026-04-16T02:12:55.231748075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:12:55.236026 containerd[1463]: time="2026-04-16T02:12:55.232060910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 02:12:55.835471 systemd[1]: Started cri-containerd-0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710.scope - libcontainer container 0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710. Apr 16 02:12:56.486727 kubelet[2579]: I0416 02:12:56.485629 2579 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-16T02:12:56Z","lastTransitionTime":"2026-04-16T02:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 16 02:12:56.489393 kubelet[2579]: E0416 02:12:56.489142 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:12:56.494610 kubelet[2579]: E0416 02:12:56.490437 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:12:56.764008 update_engine[1450]: I20260416 02:12:56.760641 1450 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 16 02:12:56.770454 update_engine[1450]: I20260416 02:12:56.766500 1450 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 16 02:12:56.781387 update_engine[1450]: I20260416 02:12:56.773178 1450 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 16 02:12:56.795593 
update_engine[1450]: I20260416 02:12:56.792158 1450 omaha_request_params.cc:62] Current group set to lts Apr 16 02:12:56.802732 update_engine[1450]: I20260416 02:12:56.799727 1450 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 16 02:12:56.802732 update_engine[1450]: I20260416 02:12:56.799909 1450 update_attempter.cc:643] Scheduling an action processor start. Apr 16 02:12:56.802732 update_engine[1450]: I20260416 02:12:56.799934 1450 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 02:12:56.819255 update_engine[1450]: I20260416 02:12:56.809144 1450 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 16 02:12:56.819255 update_engine[1450]: I20260416 02:12:56.809804 1450 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 02:12:56.819255 update_engine[1450]: I20260416 02:12:56.809823 1450 omaha_request_action.cc:272] Request: Apr 16 02:12:56.819255 update_engine[1450]: Apr 16 02:12:56.819255 update_engine[1450]: Apr 16 02:12:56.819255 update_engine[1450]: Apr 16 02:12:56.819255 update_engine[1450]: Apr 16 02:12:56.819255 update_engine[1450]: Apr 16 02:12:56.819255 update_engine[1450]: Apr 16 02:12:56.819255 update_engine[1450]: Apr 16 02:12:56.819255 update_engine[1450]: Apr 16 02:12:56.819255 update_engine[1450]: I20260416 02:12:56.809831 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 02:12:56.926417 update_engine[1450]: I20260416 02:12:56.924969 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 02:12:56.941234 update_engine[1450]: I20260416 02:12:56.939711 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 02:12:56.949611 update_engine[1450]: E20260416 02:12:56.949337 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 02:12:56.951906 update_engine[1450]: I20260416 02:12:56.951368 1450 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 16 02:12:57.140189 locksmithd[1465]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 16 02:12:57.380189 containerd[1463]: time="2026-04-16T02:12:57.379628622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mhnj,Uid:843d7211-6932-4f27-97dc-4fae04b62d94,Namespace:kube-system,Attempt:0,} returns sandbox id \"0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710\"" Apr 16 02:12:57.396750 kubelet[2579]: E0416 02:12:57.395747 2579 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:12:57.400144 kubelet[2579]: E0416 02:12:57.398436 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:12:57.784265 containerd[1463]: time="2026-04-16T02:12:57.781723878Z" level=info msg="CreateContainer within sandbox \"0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 16 02:12:58.232505 containerd[1463]: time="2026-04-16T02:12:58.232378571Z" level=info msg="CreateContainer within sandbox \"0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ef38e2763b97ab6991eb444d7b4ca7b041d9f14cd159998df3a40e1c4247f990\"" Apr 16 02:12:58.283011 containerd[1463]: time="2026-04-16T02:12:58.282222292Z" level=info msg="StartContainer for 
\"ef38e2763b97ab6991eb444d7b4ca7b041d9f14cd159998df3a40e1c4247f990\"" Apr 16 02:12:58.473187 kubelet[2579]: E0416 02:12:58.472703 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:12:58.474677 kubelet[2579]: E0416 02:12:58.473898 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:12:59.109816 systemd[1]: Started cri-containerd-ef38e2763b97ab6991eb444d7b4ca7b041d9f14cd159998df3a40e1c4247f990.scope - libcontainer container ef38e2763b97ab6991eb444d7b4ca7b041d9f14cd159998df3a40e1c4247f990. Apr 16 02:13:00.433885 containerd[1463]: time="2026-04-16T02:13:00.433456938Z" level=info msg="StartContainer for \"ef38e2763b97ab6991eb444d7b4ca7b041d9f14cd159998df3a40e1c4247f990\" returns successfully" Apr 16 02:13:00.525105 systemd[1]: cri-containerd-ef38e2763b97ab6991eb444d7b4ca7b041d9f14cd159998df3a40e1c4247f990.scope: Deactivated successfully. 
Apr 16 02:13:00.586913 kubelet[2579]: E0416 02:13:00.585986 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:13:00.586913 kubelet[2579]: E0416 02:13:00.586207 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:13:00.946985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef38e2763b97ab6991eb444d7b4ca7b041d9f14cd159998df3a40e1c4247f990-rootfs.mount: Deactivated successfully. Apr 16 02:13:00.958090 containerd[1463]: time="2026-04-16T02:13:00.956615533Z" level=info msg="shim disconnected" id=ef38e2763b97ab6991eb444d7b4ca7b041d9f14cd159998df3a40e1c4247f990 namespace=k8s.io Apr 16 02:13:00.958090 containerd[1463]: time="2026-04-16T02:13:00.957169344Z" level=warning msg="cleaning up after shim disconnected" id=ef38e2763b97ab6991eb444d7b4ca7b041d9f14cd159998df3a40e1c4247f990 namespace=k8s.io Apr 16 02:13:00.958090 containerd[1463]: time="2026-04-16T02:13:00.957181564Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 02:13:01.452043 kubelet[2579]: E0416 02:13:01.446805 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:01.984203 containerd[1463]: time="2026-04-16T02:13:01.976411692Z" level=info msg="CreateContainer within sandbox \"0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 16 02:13:02.225183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount128278518.mount: Deactivated successfully. Apr 16 02:13:02.272369 containerd[1463]: time="2026-04-16T02:13:02.271367696Z" level=info msg="CreateContainer within sandbox \"0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7484eeb9040dc214c572c8dd1b421d9f42327de64c24b82ec6564f37c2f261b3\"" Apr 16 02:13:02.335481 containerd[1463]: time="2026-04-16T02:13:02.335185231Z" level=info msg="StartContainer for \"7484eeb9040dc214c572c8dd1b421d9f42327de64c24b82ec6564f37c2f261b3\"" Apr 16 02:13:02.420638 kubelet[2579]: E0416 02:13:02.420255 2579 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:02.525071 kubelet[2579]: E0416 02:13:02.516746 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:13:02.535266 kubelet[2579]: E0416 02:13:02.532440 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:13:03.247757 systemd[1]: Started cri-containerd-7484eeb9040dc214c572c8dd1b421d9f42327de64c24b82ec6564f37c2f261b3.scope - libcontainer container 7484eeb9040dc214c572c8dd1b421d9f42327de64c24b82ec6564f37c2f261b3. 
Apr 16 02:13:04.155580 containerd[1463]: time="2026-04-16T02:13:04.155282690Z" level=info msg="StartContainer for \"7484eeb9040dc214c572c8dd1b421d9f42327de64c24b82ec6564f37c2f261b3\" returns successfully" Apr 16 02:13:04.301886 systemd[1]: cri-containerd-7484eeb9040dc214c572c8dd1b421d9f42327de64c24b82ec6564f37c2f261b3.scope: Deactivated successfully. Apr 16 02:13:04.516089 kubelet[2579]: E0416 02:13:04.509813 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:13:04.528674 kubelet[2579]: E0416 02:13:04.527907 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:13:04.999134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7484eeb9040dc214c572c8dd1b421d9f42327de64c24b82ec6564f37c2f261b3-rootfs.mount: Deactivated successfully. 
Apr 16 02:13:05.012483 containerd[1463]: time="2026-04-16T02:13:05.011099885Z" level=info msg="shim disconnected" id=7484eeb9040dc214c572c8dd1b421d9f42327de64c24b82ec6564f37c2f261b3 namespace=k8s.io Apr 16 02:13:05.015372 containerd[1463]: time="2026-04-16T02:13:05.014392793Z" level=warning msg="cleaning up after shim disconnected" id=7484eeb9040dc214c572c8dd1b421d9f42327de64c24b82ec6564f37c2f261b3 namespace=k8s.io Apr 16 02:13:05.016127 containerd[1463]: time="2026-04-16T02:13:05.015775051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 02:13:05.451153 containerd[1463]: time="2026-04-16T02:13:05.450900625Z" level=warning msg="cleanup warnings time=\"2026-04-16T02:13:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 16 02:13:05.615173 kubelet[2579]: E0416 02:13:05.614483 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:05.814743 containerd[1463]: time="2026-04-16T02:13:05.814083680Z" level=info msg="CreateContainer within sandbox \"0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 16 02:13:06.037202 containerd[1463]: time="2026-04-16T02:13:06.036714327Z" level=info msg="CreateContainer within sandbox \"0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a002316d2fc28a02bb8b66da9d25c45f622f5b5fef000943a7cb8de639be5ee8\"" Apr 16 02:13:06.077141 containerd[1463]: time="2026-04-16T02:13:06.076056378Z" level=info msg="StartContainer for \"a002316d2fc28a02bb8b66da9d25c45f622f5b5fef000943a7cb8de639be5ee8\"" Apr 16 02:13:06.335916 systemd[1]: Started 
cri-containerd-a002316d2fc28a02bb8b66da9d25c45f622f5b5fef000943a7cb8de639be5ee8.scope - libcontainer container a002316d2fc28a02bb8b66da9d25c45f622f5b5fef000943a7cb8de639be5ee8. Apr 16 02:13:06.505279 kubelet[2579]: E0416 02:13:06.504937 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:13:06.532869 kubelet[2579]: E0416 02:13:06.532071 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:13:06.761825 update_engine[1450]: I20260416 02:13:06.760331 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 02:13:06.767463 update_engine[1450]: I20260416 02:13:06.765902 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 02:13:06.772079 update_engine[1450]: I20260416 02:13:06.771867 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 02:13:06.781893 update_engine[1450]: E20260416 02:13:06.780181 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 02:13:06.781893 update_engine[1450]: I20260416 02:13:06.780750 1450 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 16 02:13:06.994265 systemd[1]: cri-containerd-a002316d2fc28a02bb8b66da9d25c45f622f5b5fef000943a7cb8de639be5ee8.scope: Deactivated successfully. 
Apr 16 02:13:07.028650 containerd[1463]: time="2026-04-16T02:13:07.027720598Z" level=info msg="StartContainer for \"a002316d2fc28a02bb8b66da9d25c45f622f5b5fef000943a7cb8de639be5ee8\" returns successfully" Apr 16 02:13:07.405108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a002316d2fc28a02bb8b66da9d25c45f622f5b5fef000943a7cb8de639be5ee8-rootfs.mount: Deactivated successfully. Apr 16 02:13:07.423817 containerd[1463]: time="2026-04-16T02:13:07.421068475Z" level=info msg="shim disconnected" id=a002316d2fc28a02bb8b66da9d25c45f622f5b5fef000943a7cb8de639be5ee8 namespace=k8s.io Apr 16 02:13:07.423817 containerd[1463]: time="2026-04-16T02:13:07.422505253Z" level=warning msg="cleaning up after shim disconnected" id=a002316d2fc28a02bb8b66da9d25c45f622f5b5fef000943a7cb8de639be5ee8 namespace=k8s.io Apr 16 02:13:07.423817 containerd[1463]: time="2026-04-16T02:13:07.422714953Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 02:13:07.449774 kubelet[2579]: E0416 02:13:07.449259 2579 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 02:13:08.397477 kubelet[2579]: E0416 02:13:08.396317 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:08.627959 kubelet[2579]: E0416 02:13:08.626076 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:13:08.737114 kubelet[2579]: E0416 02:13:08.626920 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:13:09.055917 containerd[1463]: time="2026-04-16T02:13:09.045294265Z" level=info msg="CreateContainer within sandbox \"0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 16 02:13:09.312007 containerd[1463]: time="2026-04-16T02:13:09.311247723Z" level=info msg="CreateContainer within sandbox \"0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"65819899cc5876e3a9815b462181adbf178da0a486a263e0b474c6f4f9fad690\"" Apr 16 02:13:09.434750 containerd[1463]: time="2026-04-16T02:13:09.433244944Z" level=info msg="StartContainer for \"65819899cc5876e3a9815b462181adbf178da0a486a263e0b474c6f4f9fad690\"" Apr 16 02:13:09.754903 systemd[1]: Started cri-containerd-65819899cc5876e3a9815b462181adbf178da0a486a263e0b474c6f4f9fad690.scope - libcontainer container 65819899cc5876e3a9815b462181adbf178da0a486a263e0b474c6f4f9fad690. Apr 16 02:13:10.043419 systemd[1]: cri-containerd-65819899cc5876e3a9815b462181adbf178da0a486a263e0b474c6f4f9fad690.scope: Deactivated successfully. 
Apr 16 02:13:10.132986 containerd[1463]: time="2026-04-16T02:13:10.131325911Z" level=info msg="StartContainer for \"65819899cc5876e3a9815b462181adbf178da0a486a263e0b474c6f4f9fad690\" returns successfully" Apr 16 02:13:10.529865 kubelet[2579]: E0416 02:13:10.524845 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079" Apr 16 02:13:10.573637 kubelet[2579]: E0416 02:13:10.566063 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f" Apr 16 02:13:10.598146 kubelet[2579]: E0416 02:13:10.597905 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:10.642762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65819899cc5876e3a9815b462181adbf178da0a486a263e0b474c6f4f9fad690-rootfs.mount: Deactivated successfully. 
Apr 16 02:13:10.807953 containerd[1463]: time="2026-04-16T02:13:10.799333412Z" level=info msg="shim disconnected" id=65819899cc5876e3a9815b462181adbf178da0a486a263e0b474c6f4f9fad690 namespace=k8s.io Apr 16 02:13:10.807953 containerd[1463]: time="2026-04-16T02:13:10.804358602Z" level=warning msg="cleaning up after shim disconnected" id=65819899cc5876e3a9815b462181adbf178da0a486a263e0b474c6f4f9fad690 namespace=k8s.io Apr 16 02:13:10.807953 containerd[1463]: time="2026-04-16T02:13:10.805273917Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 02:13:11.347859 containerd[1463]: time="2026-04-16T02:13:11.345003767Z" level=warning msg="cleanup warnings time=\"2026-04-16T02:13:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 16 02:13:11.415308 kubelet[2579]: E0416 02:13:11.411438 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:13:11.621791 containerd[1463]: time="2026-04-16T02:13:11.618683456Z" level=info msg="StopPodSandbox for \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\"" Apr 16 02:13:11.621791 containerd[1463]: time="2026-04-16T02:13:11.619114318Z" level=info msg="TearDown network for sandbox \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\" successfully" Apr 16 02:13:11.621791 containerd[1463]: time="2026-04-16T02:13:11.619135368Z" level=info msg="StopPodSandbox for \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\" returns successfully" Apr 16 02:13:11.772234 containerd[1463]: time="2026-04-16T02:13:11.771837710Z" level=info msg="RemovePodSandbox for \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\"" Apr 16 02:13:11.776734 containerd[1463]: time="2026-04-16T02:13:11.775959548Z" level=info msg="Forcibly stopping 
sandbox \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\"" Apr 16 02:13:11.777761 containerd[1463]: time="2026-04-16T02:13:11.777444577Z" level=info msg="TearDown network for sandbox \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\" successfully" Apr 16 02:13:11.841272 containerd[1463]: time="2026-04-16T02:13:11.840511014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 16 02:13:11.863611 containerd[1463]: time="2026-04-16T02:13:11.861730765Z" level=info msg="RemovePodSandbox \"dad7e5724c30e0def05733d7186b4d0e0db8026df21c8c8ac69d637b9f986d2b\" returns successfully" Apr 16 02:13:11.881976 containerd[1463]: time="2026-04-16T02:13:11.880763368Z" level=info msg="StopPodSandbox for \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\"" Apr 16 02:13:11.883587 containerd[1463]: time="2026-04-16T02:13:11.883244388Z" level=info msg="TearDown network for sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" successfully" Apr 16 02:13:11.883693 containerd[1463]: time="2026-04-16T02:13:11.883620923Z" level=info msg="StopPodSandbox for \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" returns successfully" Apr 16 02:13:11.896919 containerd[1463]: time="2026-04-16T02:13:11.896617308Z" level=info msg="RemovePodSandbox for \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\"" Apr 16 02:13:11.896919 containerd[1463]: time="2026-04-16T02:13:11.896775980Z" level=info msg="Forcibly stopping sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\"" Apr 16 02:13:11.897724 containerd[1463]: time="2026-04-16T02:13:11.897296460Z" level=info msg="TearDown network for sandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" 
successfully"
Apr 16 02:13:12.010964 containerd[1463]: time="2026-04-16T02:13:12.010660502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 16 02:13:12.010964 containerd[1463]: time="2026-04-16T02:13:12.010957433Z" level=info msg="RemovePodSandbox \"61c3681670c1bf02cc31e4c66ce94b7613816b1778c6f418d7cad5359a3ef217\" returns successfully"
Apr 16 02:13:12.015372 containerd[1463]: time="2026-04-16T02:13:12.014800204Z" level=info msg="StopPodSandbox for \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\""
Apr 16 02:13:12.017862 containerd[1463]: time="2026-04-16T02:13:12.017681302Z" level=error msg="StopPodSandbox for \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\" failed" error="failed to destroy network for sandbox \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\": cni plugin not initialized"
Apr 16 02:13:12.019614 kubelet[2579]: E0416 02:13:12.019195 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\": cni plugin not initialized" podSandboxID="320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673"
Apr 16 02:13:12.020084 kubelet[2579]: E0416 02:13:12.019663 2579 kuberuntime_gc.go:182] "Failed to stop sandbox before removing" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\": cni plugin not initialized" sandboxID="320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673"
Apr 16 02:13:12.020835 containerd[1463]: time="2026-04-16T02:13:12.020653954Z" level=info msg="StopPodSandbox for \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\""
Apr 16 02:13:12.020970 containerd[1463]: time="2026-04-16T02:13:12.020879986Z" level=error msg="StopPodSandbox for \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\" failed" error="failed to destroy network for sandbox \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\": cni plugin not initialized"
Apr 16 02:13:12.021172 kubelet[2579]: E0416 02:13:12.021104 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\": cni plugin not initialized" podSandboxID="5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545"
Apr 16 02:13:12.021310 kubelet[2579]: E0416 02:13:12.021174 2579 kuberuntime_gc.go:182] "Failed to stop sandbox before removing" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\": cni plugin not initialized" sandboxID="5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545"
Apr 16 02:13:12.497743 kubelet[2579]: E0416 02:13:12.496878 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079"
Apr 16 02:13:12.497743 kubelet[2579]: E0416 02:13:12.497466 2579 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 02:13:12.498480 kubelet[2579]: E0416 02:13:12.498275 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f"
Apr 16 02:13:12.913472 kubelet[2579]: E0416 02:13:12.912898 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:13.316127 containerd[1463]: time="2026-04-16T02:13:13.314482603Z" level=info msg="CreateContainer within sandbox \"0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 16 02:13:13.574629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501325333.mount: Deactivated successfully.
Apr 16 02:13:13.724885 containerd[1463]: time="2026-04-16T02:13:13.724217268Z" level=info msg="CreateContainer within sandbox \"0befc8d14325518692d328b1c4cc71f03c90019c32ee37a3b978e58a8309a710\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cc936b564c7e798308059b2eb8d643e63beb6a4f9653ec093dd7f49b918ee789\""
Apr 16 02:13:13.781158 containerd[1463]: time="2026-04-16T02:13:13.779910060Z" level=info msg="StartContainer for \"cc936b564c7e798308059b2eb8d643e63beb6a4f9653ec093dd7f49b918ee789\""
Apr 16 02:13:14.396618 systemd[1]: Started cri-containerd-cc936b564c7e798308059b2eb8d643e63beb6a4f9653ec093dd7f49b918ee789.scope - libcontainer container cc936b564c7e798308059b2eb8d643e63beb6a4f9653ec093dd7f49b918ee789.
Apr 16 02:13:14.544866 kubelet[2579]: E0416 02:13:14.543739 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f"
Apr 16 02:13:14.551452 kubelet[2579]: E0416 02:13:14.550843 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079"
Apr 16 02:13:14.698200 containerd[1463]: time="2026-04-16T02:13:14.694084498Z" level=info msg="StartContainer for \"cc936b564c7e798308059b2eb8d643e63beb6a4f9653ec093dd7f49b918ee789\" returns successfully"
Apr 16 02:13:15.877253 kubelet[2579]: E0416 02:13:15.876931 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:16.460977 kubelet[2579]: E0416 02:13:16.459659 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-27d9x" podUID="5933f2cb-ae5a-47e4-91d4-0d8be9480079"
Apr 16 02:13:16.490913 kubelet[2579]: E0416 02:13:16.490328 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-ss8dh" podUID="fd22b9d8-1786-4923-96f3-3db07d47e21f"
Apr 16 02:13:16.763208 update_engine[1450]: I20260416 02:13:16.761178 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 02:13:16.768844 update_engine[1450]: I20260416 02:13:16.766251 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 02:13:16.770788 update_engine[1450]: I20260416 02:13:16.770758 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 02:13:16.778005 update_engine[1450]: E20260416 02:13:16.775971 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 02:13:16.784820 update_engine[1450]: I20260416 02:13:16.781973 1450 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 16 02:13:16.821755 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 16 02:13:17.228365 kubelet[2579]: E0416 02:13:17.228026 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:18.412689 kubelet[2579]: E0416 02:13:18.412306 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:18.606142 kubelet[2579]: E0416 02:13:18.602347 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:18.616366 containerd[1463]: time="2026-04-16T02:13:18.606612925Z" level=info msg="StopPodSandbox for \"9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578\""
Apr 16 02:13:18.616366 containerd[1463]: time="2026-04-16T02:13:18.610078617Z" level=info msg="StopPodSandbox for \"e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56\""
Apr 16 02:13:18.616366 containerd[1463]: time="2026-04-16T02:13:18.610872468Z" level=info msg="Ensure that sandbox e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56 in task-service has been cleanup successfully"
Apr 16 02:13:18.616366 containerd[1463]: time="2026-04-16T02:13:18.611750218Z" level=info msg="Ensure that sandbox 9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578 in task-service has been cleanup successfully"
Apr 16 02:13:18.621316 kubelet[2579]: E0416 02:13:18.620048 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:21.230210 systemd[1]: run-containerd-runc-k8s.io-cc936b564c7e798308059b2eb8d643e63beb6a4f9653ec093dd7f49b918ee789-runc.fJnKOF.mount: Deactivated successfully.
Apr 16 02:13:24.129057 kubelet[2579]: E0416 02:13:24.127616 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:26.766748 update_engine[1450]: I20260416 02:13:26.766180 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 02:13:26.823668 update_engine[1450]: I20260416 02:13:26.822111 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 02:13:26.823668 update_engine[1450]: I20260416 02:13:26.823076 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 02:13:26.843706 update_engine[1450]: E20260416 02:13:26.840420 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 02:13:26.846416 update_engine[1450]: I20260416 02:13:26.846050 1450 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 16 02:13:26.846416 update_engine[1450]: I20260416 02:13:26.846410 1450 omaha_request_action.cc:617] Omaha request response:
Apr 16 02:13:26.847963 update_engine[1450]: E20260416 02:13:26.847194 1450 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 16 02:13:26.847963 update_engine[1450]: I20260416 02:13:26.847502 1450 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 16 02:13:26.847963 update_engine[1450]: I20260416 02:13:26.847509 1450 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 02:13:26.847963 update_engine[1450]: I20260416 02:13:26.847574 1450 update_attempter.cc:306] Processing Done.
Apr 16 02:13:26.847963 update_engine[1450]: E20260416 02:13:26.847589 1450 update_attempter.cc:619] Update failed.
Apr 16 02:13:26.847963 update_engine[1450]: I20260416 02:13:26.847595 1450 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 16 02:13:26.847963 update_engine[1450]: I20260416 02:13:26.847600 1450 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 16 02:13:26.847963 update_engine[1450]: I20260416 02:13:26.847604 1450 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 16 02:13:26.848379 update_engine[1450]: I20260416 02:13:26.847990 1450 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 16 02:13:26.848379 update_engine[1450]: I20260416 02:13:26.848016 1450 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 16 02:13:26.848379 update_engine[1450]: I20260416 02:13:26.848019 1450 omaha_request_action.cc:272] Request:
Apr 16 02:13:26.848379 update_engine[1450]:
Apr 16 02:13:26.848379 update_engine[1450]:
Apr 16 02:13:26.848379 update_engine[1450]:
Apr 16 02:13:26.848379 update_engine[1450]:
Apr 16 02:13:26.848379 update_engine[1450]:
Apr 16 02:13:26.848379 update_engine[1450]:
Apr 16 02:13:26.848379 update_engine[1450]: I20260416 02:13:26.848024 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 02:13:26.848379 update_engine[1450]: I20260416 02:13:26.848367 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 02:13:26.862418 update_engine[1450]: I20260416 02:13:26.850878 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 02:13:26.862418 update_engine[1450]: E20260416 02:13:26.857316 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 02:13:26.862418 update_engine[1450]: I20260416 02:13:26.857506 1450 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 16 02:13:26.862418 update_engine[1450]: I20260416 02:13:26.857606 1450 omaha_request_action.cc:617] Omaha request response:
Apr 16 02:13:26.862418 update_engine[1450]: I20260416 02:13:26.857614 1450 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 02:13:26.862418 update_engine[1450]: I20260416 02:13:26.857618 1450 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 02:13:26.862418 update_engine[1450]: I20260416 02:13:26.857622 1450 update_attempter.cc:306] Processing Done.
Apr 16 02:13:26.862418 update_engine[1450]: I20260416 02:13:26.857628 1450 update_attempter.cc:310] Error event sent.
Apr 16 02:13:26.862418 update_engine[1450]: I20260416 02:13:26.857638 1450 update_check_scheduler.cc:74] Next update check in 48m14s
Apr 16 02:13:26.876282 locksmithd[1465]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 16 02:13:26.876282 locksmithd[1465]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 16 02:13:27.418613 systemd[1]: run-containerd-runc-k8s.io-cc936b564c7e798308059b2eb8d643e63beb6a4f9653ec093dd7f49b918ee789-runc.v330I7.mount: Deactivated successfully.
Apr 16 02:13:35.298856 systemd-networkd[1292]: lxc_health: Link UP
Apr 16 02:13:35.308386 systemd-networkd[1292]: lxc_health: Gained carrier
Apr 16 02:13:36.214129 kubelet[2579]: E0416 02:13:36.213745 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:36.561202 systemd-networkd[1292]: lxc_health: Gained IPv6LL
Apr 16 02:13:36.593811 kubelet[2579]: E0416 02:13:36.591805 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:36.803118 kubelet[2579]: I0416 02:13:36.802370 2579 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-2mhnj" podStartSLOduration=45.802146223 podStartE2EDuration="45.802146223s" podCreationTimestamp="2026-04-16 02:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:13:16.073356585 +0000 UTC m=+308.083127646" watchObservedRunningTime="2026-04-16 02:13:36.802146223 +0000 UTC m=+328.811917268"
Apr 16 02:13:37.233809 containerd[1463]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 16 02:13:37.283897 systemd[1]: run-netns-cni\x2d4287db4a\x2d5899\x2dc6e6\x2d1b5f\x2d06d95286ca31.mount: Deactivated successfully.
Apr 16 02:13:37.306591 containerd[1463]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 16 02:13:37.313752 containerd[1463]: time="2026-04-16T02:13:37.312461340Z" level=info msg="TearDown network for sandbox \"e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56\" successfully"
Apr 16 02:13:37.315234 containerd[1463]: time="2026-04-16T02:13:37.315104924Z" level=info msg="StopPodSandbox for \"e8b5a32d98f219c73adae94a89a88abe22bac6a3ee8be54986216359d192bf56\" returns successfully"
Apr 16 02:13:37.347045 containerd[1463]: time="2026-04-16T02:13:37.326842275Z" level=info msg="TearDown network for sandbox \"9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578\" successfully"
Apr 16 02:13:37.437705 containerd[1463]: time="2026-04-16T02:13:37.347343984Z" level=info msg="StopPodSandbox for \"9dae3748f3cfbbc558d9d426e1d6364de9db33f2f8346c7069b830e35d3a5578\" returns successfully"
Apr 16 02:13:37.437705 containerd[1463]: time="2026-04-16T02:13:37.359597162Z" level=info msg="StopPodSandbox for \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\""
Apr 16 02:13:37.428918 systemd[1]: run-netns-cni\x2df81e1f66\x2df53c\x2df78a\x2d972d\x2dcf3b3db5b5d0.mount: Deactivated successfully.
Apr 16 02:13:37.507913 containerd[1463]: time="2026-04-16T02:13:37.504354296Z" level=info msg="StopPodSandbox for \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\""
Apr 16 02:13:38.010391 containerd[1463]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 16 02:13:38.010391 containerd[1463]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni
Apr 16 02:13:38.019384 containerd[1463]: time="2026-04-16T02:13:38.016994952Z" level=info msg="TearDown network for sandbox \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\" successfully"
Apr 16 02:13:38.019384 containerd[1463]: time="2026-04-16T02:13:38.017177090Z" level=info msg="StopPodSandbox for \"320ec9797e26ababf66e108d22a2b17e3bbb4f30f728d390c943f6be137ae673\" returns successfully"
Apr 16 02:13:38.033620 kubelet[2579]: E0416 02:13:38.033236 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:38.105230 containerd[1463]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 16 02:13:38.105230 containerd[1463]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni
Apr 16 02:13:38.105230 containerd[1463]: time="2026-04-16T02:13:38.104329705Z" level=info msg="TearDown network for sandbox \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\" successfully"
Apr 16 02:13:38.105230 containerd[1463]: time="2026-04-16T02:13:38.104629745Z" level=info msg="StopPodSandbox for \"5ab9d64e5845e4bf6443b363a3afd834a3e40307fe2c128bbf7995de35d2c545\" returns successfully"
Apr 16 02:13:38.111829 containerd[1463]: time="2026-04-16T02:13:38.105456306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ss8dh,Uid:fd22b9d8-1786-4923-96f3-3db07d47e21f,Namespace:kube-system,Attempt:2,}"
Apr 16 02:13:38.142355 kubelet[2579]: E0416 02:13:38.140095 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:38.241908 containerd[1463]: time="2026-04-16T02:13:38.230963259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-27d9x,Uid:5933f2cb-ae5a-47e4-91d4-0d8be9480079,Namespace:kube-system,Attempt:2,}"
Apr 16 02:13:39.017352 systemd-networkd[1292]: lxce598d3314aff: Link UP
Apr 16 02:13:39.061864 kernel: eth0: renamed from tmp7c374
Apr 16 02:13:39.138192 systemd-networkd[1292]: lxce598d3314aff: Gained carrier
Apr 16 02:13:39.277122 systemd-networkd[1292]: lxc5555a866f72a: Link UP
Apr 16 02:13:39.419885 kernel: eth0: renamed from tmp76830
Apr 16 02:13:39.453038 systemd-networkd[1292]: lxc5555a866f72a: Gained carrier
Apr 16 02:13:39.492697 kubelet[2579]: E0416 02:13:39.489227 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 02:13:40.336821 systemd-networkd[1292]: lxce598d3314aff: Gained IPv6LL
Apr 16 02:13:40.912201 systemd-networkd[1292]: lxc5555a866f72a: Gained IPv6LL
Apr 16 02:13:41.399205 systemd[1]: run-containerd-runc-k8s.io-cc936b564c7e798308059b2eb8d643e63beb6a4f9653ec093dd7f49b918ee789-runc.LlDzUO.mount: Deactivated successfully.
Apr 16 02:13:42.371019 sshd[4631]: pam_unix(sshd:session): session closed for user core
Apr 16 02:13:42.391410 systemd[1]: sshd@45-10.0.0.6:22-10.0.0.1:57520.service: Deactivated successfully.
Apr 16 02:13:42.442800 systemd[1]: session-46.scope: Deactivated successfully.
Apr 16 02:13:42.443327 systemd[1]: session-46.scope: Consumed 7.093s CPU time.
Apr 16 02:13:42.519968 systemd-logind[1442]: Session 46 logged out. Waiting for processes to exit.
Apr 16 02:13:42.527203 systemd-logind[1442]: Removed session 46.